00:00:00.001 Started by upstream project "autotest-per-patch" build number 127173 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.099 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.099 The recommended git tool is: git 00:00:00.100 using credential 00000000-0000-0000-0000-000000000002 00:00:00.102 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.138 Fetching changes from the remote Git repository 00:00:00.140 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.171 Using shallow fetch with depth 1 00:00:00.171 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.171 > git --version # timeout=10 00:00:00.205 > git --version # 'git version 2.39.2' 00:00:00.205 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.226 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.226 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:08.097 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:08.107 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:08.117 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD) 00:00:08.117 > git config core.sparsecheckout # timeout=10 00:00:08.127 > git read-tree -mu HEAD # timeout=10 00:00:08.142 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5 00:00:08.157 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs" 00:00:08.157 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10 00:00:08.238 [Pipeline] Start of Pipeline 00:00:08.287 [Pipeline] library 00:00:08.289 Loading library shm_lib@master 00:00:08.289 Library shm_lib@master is cached. Copying from home. 00:00:08.307 [Pipeline] node 00:00:08.320 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:08.321 [Pipeline] { 00:00:08.331 [Pipeline] catchError 00:00:08.332 [Pipeline] { 00:00:08.343 [Pipeline] wrap 00:00:08.350 [Pipeline] { 00:00:08.355 [Pipeline] stage 00:00:08.356 [Pipeline] { (Prologue) 00:00:08.522 [Pipeline] sh 00:00:08.807 + logger -p user.info -t JENKINS-CI 00:00:08.825 [Pipeline] echo 00:00:08.827 Node: GP11 00:00:08.837 [Pipeline] sh 00:00:09.138 [Pipeline] setCustomBuildProperty 00:00:09.151 [Pipeline] echo 00:00:09.153 Cleanup processes 00:00:09.159 [Pipeline] sh 00:00:09.446 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.447 705103 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.460 [Pipeline] sh 00:00:09.745 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.745 ++ grep -v 'sudo pgrep' 00:00:09.745 ++ awk '{print $1}' 00:00:09.745 + sudo kill -9 00:00:09.745 + true 00:00:09.763 [Pipeline] cleanWs 00:00:09.774 [WS-CLEANUP] Deleting project workspace... 00:00:09.774 [WS-CLEANUP] Deferred wipeout is used... 
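A condensed, stand-alone sketch of the cleanup step traced in the Prologue stage above: the pipeline looks for leftover processes from a previous run that still reference the SPDK workspace, filters out the pgrep invocation itself, keeps only the PID column, and force-kills them. The paths are the ones printed in the log; wrapping the pipeline in a `cleanup_stale_spdk` function is editorial.

```bash
#!/usr/bin/env bash
# Sketch of the stale-process cleanup shown in the trace above.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest

cleanup_stale_spdk() {
    # List leftover processes still referencing the SPDK checkout,
    # drop the pgrep command itself, keep only the PID column.
    local pids
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # "kill -9" with an empty argument list exits non-zero, hence the
    # trailing "|| true" (the "+ true" seen in the log) so an already
    # clean node does not fail the stage.
    sudo kill -9 $pids || true
}

cleanup_stale_spdk
```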
00:00:09.781 [WS-CLEANUP] done 00:00:09.786 [Pipeline] setCustomBuildProperty 00:00:09.798 [Pipeline] sh 00:00:10.078 + sudo git config --global --replace-all safe.directory '*' 00:00:10.163 [Pipeline] httpRequest 00:00:10.205 [Pipeline] echo 00:00:10.207 Sorcerer 10.211.164.101 is alive 00:00:10.215 [Pipeline] httpRequest 00:00:10.220 HttpMethod: GET 00:00:10.221 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:10.222 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:10.233 Response Code: HTTP/1.1 200 OK 00:00:10.233 Success: Status code 200 is in the accepted range: 200,404 00:00:10.234 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:12.569 [Pipeline] sh 00:00:12.856 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:12.872 [Pipeline] httpRequest 00:00:12.898 [Pipeline] echo 00:00:12.899 Sorcerer 10.211.164.101 is alive 00:00:12.906 [Pipeline] httpRequest 00:00:12.910 HttpMethod: GET 00:00:12.910 URL: http://10.211.164.101/packages/spdk_d3d267b545ef8c74ca8c4321db78e07e6c6d1faa.tar.gz 00:00:12.911 Sending request to url: http://10.211.164.101/packages/spdk_d3d267b545ef8c74ca8c4321db78e07e6c6d1faa.tar.gz 00:00:12.928 Response Code: HTTP/1.1 200 OK 00:00:12.929 Success: Status code 200 is in the accepted range: 200,404 00:00:12.929 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d3d267b545ef8c74ca8c4321db78e07e6c6d1faa.tar.gz 00:01:00.136 [Pipeline] sh 00:01:00.430 + tar --no-same-owner -xf spdk_d3d267b545ef8c74ca8c4321db78e07e6c6d1faa.tar.gz 00:01:02.983 [Pipeline] sh 00:01:03.270 + git -C spdk log --oneline -n5 00:01:03.270 d3d267b54 lib/reduce: if memory allocation fails, g_vol_count--. 00:01:03.270 c5d7cded4 bdev/compress: print error code information in load compress bdev 00:01:03.270 58883cba9 bdev/compress: release reduce vol resource when comp bdev fails to be created. 
00:01:03.270 b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8 00:01:03.270 c2a77f51e module/bdev/nvme: add detach-monitor poller 00:01:03.283 [Pipeline] } 00:01:03.299 [Pipeline] // stage 00:01:03.310 [Pipeline] stage 00:01:03.313 [Pipeline] { (Prepare) 00:01:03.331 [Pipeline] writeFile 00:01:03.351 [Pipeline] sh 00:01:03.638 + logger -p user.info -t JENKINS-CI 00:01:03.652 [Pipeline] sh 00:01:03.944 + logger -p user.info -t JENKINS-CI 00:01:03.955 [Pipeline] sh 00:01:04.238 + cat autorun-spdk.conf 00:01:04.238 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.238 SPDK_TEST_NVMF=1 00:01:04.238 SPDK_TEST_NVME_CLI=1 00:01:04.238 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.238 SPDK_TEST_NVMF_NICS=e810 00:01:04.238 SPDK_TEST_VFIOUSER=1 00:01:04.238 SPDK_RUN_UBSAN=1 00:01:04.238 NET_TYPE=phy 00:01:04.247 RUN_NIGHTLY=0 00:01:04.252 [Pipeline] readFile 00:01:04.278 [Pipeline] withEnv 00:01:04.280 [Pipeline] { 00:01:04.294 [Pipeline] sh 00:01:04.607 + set -ex 00:01:04.607 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:04.607 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:04.607 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.607 ++ SPDK_TEST_NVMF=1 00:01:04.607 ++ SPDK_TEST_NVME_CLI=1 00:01:04.607 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:04.607 ++ SPDK_TEST_NVMF_NICS=e810 00:01:04.607 ++ SPDK_TEST_VFIOUSER=1 00:01:04.607 ++ SPDK_RUN_UBSAN=1 00:01:04.607 ++ NET_TYPE=phy 00:01:04.607 ++ RUN_NIGHTLY=0 00:01:04.607 + case $SPDK_TEST_NVMF_NICS in 00:01:04.607 + DRIVERS=ice 00:01:04.607 + [[ tcp == \r\d\m\a ]] 00:01:04.607 + [[ -n ice ]] 00:01:04.607 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:04.607 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:04.607 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:04.607 rmmod: ERROR: Module irdma is not currently loaded 00:01:04.607 rmmod: ERROR: Module i40iw is not currently loaded 00:01:04.607 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:04.607 + true 00:01:04.607 + for D in $DRIVERS 00:01:04.607 + sudo modprobe ice 00:01:04.607 + exit 0 00:01:04.617 [Pipeline] } 00:01:04.635 [Pipeline] // withEnv 00:01:04.640 [Pipeline] } 00:01:04.656 [Pipeline] // stage 00:01:04.666 [Pipeline] catchError 00:01:04.668 [Pipeline] { 00:01:04.683 [Pipeline] timeout 00:01:04.683 Timeout set to expire in 50 min 00:01:04.685 [Pipeline] { 00:01:04.702 [Pipeline] stage 00:01:04.704 [Pipeline] { (Tests) 00:01:04.720 [Pipeline] sh 00:01:05.011 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.011 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.011 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.011 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:05.011 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.011 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.011 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:05.011 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.011 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.011 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.011 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:05.011 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.011 + source /etc/os-release 00:01:05.011 ++ NAME='Fedora Linux' 00:01:05.011 ++ VERSION='38 (Cloud Edition)' 00:01:05.011 ++ ID=fedora 00:01:05.011 ++ VERSION_ID=38 00:01:05.011 ++ VERSION_CODENAME= 00:01:05.011 ++ PLATFORM_ID=platform:f38 00:01:05.011 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:05.011 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:05.011 ++ LOGO=fedora-logo-icon 00:01:05.011 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:05.011 ++ HOME_URL=https://fedoraproject.org/ 00:01:05.011 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:05.011 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:05.011 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:05.011 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:05.011 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:05.011 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:05.011 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:05.011 ++ SUPPORT_END=2024-05-14 00:01:05.011 ++ VARIANT='Cloud Edition' 00:01:05.011 ++ VARIANT_ID=cloud 00:01:05.011 + uname -a 00:01:05.011 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:05.011 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:06.391 Hugepages 00:01:06.391 node hugesize free / total 00:01:06.391 node0 1048576kB 0 / 0 00:01:06.391 node0 2048kB 0 / 0 00:01:06.391 node1 1048576kB 0 / 0 00:01:06.391 node1 2048kB 0 / 0 00:01:06.391 00:01:06.391 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:06.391 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:06.391 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:06.391 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:06.391 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:06.391 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:06.391 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:06.392 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:06.392 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:06.392 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:06.392 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:06.392 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:06.392 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:06.392 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:06.392 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:06.392 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:06.392 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:06.392 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:06.392 + rm -f /tmp/spdk-ld-path 00:01:06.392 + source autorun-spdk.conf 00:01:06.392 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.392 ++ SPDK_TEST_NVMF=1 00:01:06.392 ++ SPDK_TEST_NVME_CLI=1 00:01:06.392 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.392 ++ SPDK_TEST_NVMF_NICS=e810 00:01:06.392 ++ SPDK_TEST_VFIOUSER=1 00:01:06.392 ++ SPDK_RUN_UBSAN=1 00:01:06.392 ++ NET_TYPE=phy 00:01:06.392 ++ RUN_NIGHTLY=0 00:01:06.392 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:06.392 + [[ -n '' ]] 00:01:06.392 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:06.392 + for M in /var/spdk/build-*-manifest.txt 00:01:06.392 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:06.392 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.392 + for M in /var/spdk/build-*-manifest.txt 00:01:06.392 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:06.392 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.392 ++ uname 00:01:06.392 + [[ Linux == \L\i\n\u\x ]] 00:01:06.392 + sudo dmesg -T 00:01:06.392 + sudo dmesg --clear 00:01:06.392 + dmesg_pid=705797 00:01:06.392 + [[ Fedora Linux == FreeBSD ]] 00:01:06.392 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.392 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.392 + sudo dmesg -Tw 00:01:06.392 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:06.392 + [[ -x /usr/src/fio-static/fio ]] 00:01:06.392 + export FIO_BIN=/usr/src/fio-static/fio 00:01:06.392 + FIO_BIN=/usr/src/fio-static/fio 00:01:06.392 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:06.392 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:06.392 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:06.392 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.392 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.392 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:06.392 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.392 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.392 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:06.392 Test configuration: 00:01:06.392 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.392 SPDK_TEST_NVMF=1 00:01:06.392 SPDK_TEST_NVME_CLI=1 00:01:06.392 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.392 SPDK_TEST_NVMF_NICS=e810 00:01:06.392 SPDK_TEST_VFIOUSER=1 00:01:06.392 SPDK_RUN_UBSAN=1 00:01:06.392 NET_TYPE=phy 00:01:06.392 RUN_NIGHTLY=0 14:02:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:06.392 14:02:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:06.392 14:02:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:06.392 14:02:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:06.392 14:02:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.392 14:02:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.392 14:02:35 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.392 14:02:35 -- paths/export.sh@5 -- $ export PATH 00:01:06.392 14:02:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.392 14:02:35 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:06.392 14:02:35 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:06.392 14:02:35 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721908955.XXXXXX 00:01:06.392 14:02:35 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721908955.Lakv7h 00:01:06.392 14:02:35 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:06.392 14:02:35 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:06.392 14:02:35 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:06.392 14:02:35 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:06.392 14:02:35 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:06.392 14:02:35 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:06.392 14:02:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:06.392 14:02:35 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.392 14:02:35 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:06.392 14:02:35 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:06.392 14:02:35 -- pm/common@17 -- $ local monitor 00:01:06.392 14:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.392 14:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.392 14:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.392 14:02:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:06.392 14:02:35 -- pm/common@21 -- $ date +%s 00:01:06.392 14:02:35 -- pm/common@21 -- $ date +%s 00:01:06.392 14:02:35 -- pm/common@25 -- $ sleep 1 00:01:06.392 14:02:35 -- pm/common@21 -- $ date +%s 00:01:06.392 14:02:35 -- pm/common@21 -- $ date +%s 00:01:06.392 14:02:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721908955 00:01:06.392 14:02:35 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721908955 00:01:06.392 14:02:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721908955 00:01:06.392 14:02:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721908955 00:01:06.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721908955_collect-vmstat.pm.log 00:01:06.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721908955_collect-cpu-load.pm.log 00:01:06.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721908955_collect-cpu-temp.pm.log 00:01:06.392 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721908955_collect-bmc-pm.bmc.pm.log 00:01:07.332 14:02:36 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:07.332 14:02:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:07.332 14:02:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:07.332 14:02:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:07.332 14:02:36 -- spdk/autobuild.sh@16 -- $ date -u 00:01:07.332 Thu Jul 25 12:02:36 PM UTC 2024 00:01:07.332 14:02:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:07.332 v24.09-pre-305-gd3d267b54 00:01:07.332 14:02:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:07.332 14:02:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:07.332 14:02:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:07.332 14:02:36 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:07.332 14:02:36 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:07.332 14:02:36 -- common/autotest_common.sh@10 -- $ set +x 00:01:07.333 ************************************ 00:01:07.333 START TEST ubsan 00:01:07.333 ************************************ 00:01:07.333 14:02:36 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:07.333 using ubsan 00:01:07.333 00:01:07.333 real 0m0.000s 00:01:07.333 user 0m0.000s 00:01:07.333 sys 0m0.000s 00:01:07.333 14:02:36 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:07.333 14:02:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:07.333 ************************************ 00:01:07.333 END TEST ubsan 00:01:07.333 ************************************ 00:01:07.333 14:02:36 -- common/autotest_common.sh@1142 -- $ return 0 00:01:07.333 14:02:36 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:07.333 14:02:36 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:07.333 14:02:36 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:07.333 14:02:36 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:07.333 14:02:36 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:07.333 14:02:36 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:07.333 14:02:36 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:07.333 14:02:36 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:07.333 14:02:36 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:07.591 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:07.591 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:07.849 Using 'verbs' RDMA provider 00:01:18.400 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:28.382 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:28.382 Creating mk/config.mk...done. 00:01:28.382 Creating mk/cc.flags.mk...done. 00:01:28.382 Type 'make' to build. 00:01:28.382 14:02:57 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:28.382 14:02:57 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:28.382 14:02:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:28.382 14:02:57 -- common/autotest_common.sh@10 -- $ set +x 00:01:28.382 ************************************ 00:01:28.382 START TEST make 00:01:28.382 ************************************ 00:01:28.382 14:02:57 make -- common/autotest_common.sh@1123 -- $ make -j48 00:01:28.382 make[1]: Nothing to be done for 'all'. 00:01:30.304 The Meson build system 00:01:30.304 Version: 1.3.1 00:01:30.304 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:30.304 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:30.304 Build type: native build 00:01:30.304 Project name: libvfio-user 00:01:30.304 Project version: 0.0.1 00:01:30.304 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:30.304 C linker for the host machine: cc ld.bfd 2.39-16 00:01:30.304 Host machine cpu family: x86_64 00:01:30.304 Host machine cpu: x86_64 00:01:30.304 Run-time dependency threads found: YES 00:01:30.304 Library dl found: YES 00:01:30.304 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:30.304 Run-time dependency json-c found: YES 0.17 00:01:30.304 Run-time dependency cmocka found: YES 1.1.7 00:01:30.304 Program pytest-3 found: NO 00:01:30.304 Program flake8 found: NO 00:01:30.304 Program misspell-fixer found: NO 00:01:30.304 Program restructuredtext-lint found: NO 00:01:30.304 Program valgrind found: YES (/usr/bin/valgrind) 00:01:30.304 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:30.304 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:30.304 Compiler for C supports arguments -Wwrite-strings: YES 00:01:30.304 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:30.304 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:30.304 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:30.304 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
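For reference, a minimal sketch of the build steps that spdk/autobuild.sh drives in the trace above: SPDK's configure script with the exact flag set printed in the log, followed by the parallel make that `run_test make make -j48` kicks off. Only the flags and paths shown in the log are used; the stand-alone script form is editorial.

```bash
#!/usr/bin/env bash
# Sketch of the SPDK configure + build sequence traced above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

cd "$SPDK_DIR"
./configure \
    --enable-debug --enable-werror \
    --with-rdma --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-vfio-user --with-shared

# Equivalent of the "run_test make make -j48" step in the log.
make -j48
```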
00:01:30.304 Build targets in project: 8 00:01:30.304 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:30.304 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:30.304 00:01:30.304 libvfio-user 0.0.1 00:01:30.304 00:01:30.304 User defined options 00:01:30.304 buildtype : debug 00:01:30.304 default_library: shared 00:01:30.304 libdir : /usr/local/lib 00:01:30.304 00:01:30.304 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.916 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:30.916 [1/37] Compiling C object samples/null.p/null.c.o 00:01:30.916 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:30.916 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:30.916 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:30.916 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:31.179 [6/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:31.179 [7/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:31.179 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:31.179 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:31.179 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:31.179 [11/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:31.179 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:31.179 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:31.179 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:31.179 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:31.179 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:31.179 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:31.179 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:31.179 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:31.179 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:31.179 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:31.179 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:31.179 [23/37] Compiling C object samples/server.p/server.c.o 00:01:31.179 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:31.179 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:31.179 [26/37] Compiling C object samples/client.p/client.c.o 00:01:31.179 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:31.443 [28/37] Linking target samples/client 00:01:31.443 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:31.443 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:31.443 [31/37] Linking target test/unit_tests 00:01:31.702 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:31.702 [33/37] Linking target samples/null 00:01:31.702 [34/37] Linking target samples/server 00:01:31.702 [35/37] Linking target samples/gpio-pci-idio-16 00:01:31.702 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:31.702 [37/37] Linking target samples/lspci 00:01:31.702 INFO: autodetecting backend as ninja 00:01:31.702 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:31.702 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:32.279 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:32.279 ninja: no work to do. 00:01:37.570 The Meson build system 00:01:37.570 Version: 1.3.1 00:01:37.570 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:37.570 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:37.570 Build type: native build 00:01:37.570 Program cat found: YES (/usr/bin/cat) 00:01:37.570 Project name: DPDK 00:01:37.570 Project version: 24.03.0 00:01:37.570 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:37.570 C linker for the host machine: cc ld.bfd 2.39-16 00:01:37.570 Host machine cpu family: x86_64 00:01:37.570 Host machine cpu: x86_64 00:01:37.570 Message: ## Building in Developer Mode ## 00:01:37.570 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:37.570 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:37.570 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:37.570 Program python3 found: YES (/usr/bin/python3) 00:01:37.570 Program cat found: YES (/usr/bin/cat) 00:01:37.570 Compiler for C supports arguments -march=native: YES 00:01:37.570 Checking for size of "void *" : 8 00:01:37.570 Checking for size of "void *" : 8 (cached) 00:01:37.570 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:37.570 Library m found: YES 00:01:37.570 Library numa found: YES 00:01:37.570 Has header "numaif.h" : YES 00:01:37.570 Library fdt found: NO 00:01:37.570 Library execinfo found: NO 00:01:37.570 Has header "execinfo.h" : YES 00:01:37.570 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:37.570 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:37.570 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:37.570 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:37.570 Run-time dependency openssl found: YES 3.0.9 00:01:37.570 Run-time dependency libpcap found: YES 1.10.4 00:01:37.570 Has header "pcap.h" with dependency libpcap: YES 00:01:37.570 Compiler for C supports arguments -Wcast-qual: YES 00:01:37.570 Compiler for C supports arguments -Wdeprecated: YES 00:01:37.570 Compiler for C supports arguments -Wformat: YES 00:01:37.570 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:37.570 Compiler for C supports arguments -Wformat-security: NO 00:01:37.570 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:37.570 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:37.570 Compiler for C supports arguments -Wnested-externs: YES 00:01:37.570 Compiler for C supports arguments -Wold-style-definition: YES 00:01:37.570 Compiler for C supports arguments -Wpointer-arith: YES 00:01:37.570 Compiler for C supports arguments -Wsign-compare: YES 00:01:37.570 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:37.570 Compiler for C supports arguments -Wundef: YES 00:01:37.570 Compiler for C supports arguments -Wwrite-strings: YES 00:01:37.570 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:37.570 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:37.570 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:37.570 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:37.570 Program objdump found: YES (/usr/bin/objdump) 00:01:37.570 Compiler for C supports arguments -mavx512f: YES 00:01:37.570 Checking if "AVX512 checking" compiles: YES 00:01:37.570 Fetching value of define "__SSE4_2__" : 1 00:01:37.570 Fetching value of define "__AES__" : 1 00:01:37.570 Fetching value of define "__AVX__" : 1 00:01:37.570 Fetching value of define "__AVX2__" : (undefined) 00:01:37.570 Fetching value of define "__AVX512BW__" : (undefined) 00:01:37.570 Fetching value of define "__AVX512CD__" : (undefined) 00:01:37.570 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:37.570 Fetching value of define "__AVX512F__" : (undefined) 00:01:37.570 Fetching value of define "__AVX512VL__" : (undefined) 00:01:37.570 Fetching value of define "__PCLMUL__" : 1 00:01:37.570 Fetching value of define "__RDRND__" : 1 00:01:37.570 Fetching value of define "__RDSEED__" : (undefined) 00:01:37.570 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:37.570 Fetching value of define "__znver1__" : (undefined) 00:01:37.570 Fetching value of define "__znver2__" : (undefined) 00:01:37.570 Fetching value of define "__znver3__" : (undefined) 00:01:37.570 Fetching value of define "__znver4__" : (undefined) 00:01:37.570 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:37.570 Message: lib/log: Defining dependency "log" 00:01:37.570 Message: lib/kvargs: Defining dependency "kvargs" 00:01:37.570 Message: lib/telemetry: Defining dependency "telemetry" 00:01:37.570 Checking for function "getentropy" : NO 00:01:37.570 Message: lib/eal: Defining dependency "eal" 00:01:37.570 Message: lib/ring: Defining dependency "ring" 00:01:37.570 Message: lib/rcu: Defining dependency "rcu" 00:01:37.570 Message: lib/mempool: Defining dependency "mempool" 00:01:37.570 Message: lib/mbuf: Defining dependency "mbuf" 00:01:37.570 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:37.570 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:37.570 Compiler for C supports arguments -mpclmul: YES 00:01:37.570 Compiler for C supports arguments -maes: YES 00:01:37.570 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:37.570 Compiler for C supports arguments -mavx512bw: YES 00:01:37.570 Compiler for C supports arguments -mavx512dq: YES 00:01:37.570 Compiler for C supports arguments -mavx512vl: YES 00:01:37.570 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:37.570 Compiler for C supports arguments -mavx2: YES 00:01:37.570 Compiler for C supports arguments -mavx: YES 00:01:37.570 Message: lib/net: Defining dependency "net" 00:01:37.570 Message: lib/meter: Defining dependency "meter" 00:01:37.570 Message: lib/ethdev: Defining dependency "ethdev" 00:01:37.570 Message: lib/pci: Defining dependency "pci" 00:01:37.570 Message: lib/cmdline: Defining dependency "cmdline" 00:01:37.570 Message: lib/hash: Defining dependency "hash" 00:01:37.570 Message: lib/timer: Defining dependency "timer" 00:01:37.570 Message: lib/compressdev: Defining dependency "compressdev" 00:01:37.570 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:37.570 Message: lib/dmadev: Defining dependency "dmadev" 00:01:37.570 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:37.570 Message: lib/power: Defining dependency "power" 00:01:37.570 Message: lib/reorder: Defining dependency "reorder" 00:01:37.570 
Message: lib/security: Defining dependency "security" 00:01:37.570 Has header "linux/userfaultfd.h" : YES 00:01:37.570 Has header "linux/vduse.h" : YES 00:01:37.570 Message: lib/vhost: Defining dependency "vhost" 00:01:37.570 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:37.570 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:37.570 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:37.570 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:37.570 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:37.570 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:37.570 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:37.570 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:37.570 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:37.570 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:37.570 Program doxygen found: YES (/usr/bin/doxygen) 00:01:37.570 Configuring doxy-api-html.conf using configuration 00:01:37.570 Configuring doxy-api-man.conf using configuration 00:01:37.570 Program mandb found: YES (/usr/bin/mandb) 00:01:37.570 Program sphinx-build found: NO 00:01:37.570 Configuring rte_build_config.h using configuration 00:01:37.570 Message: 00:01:37.570 ================= 00:01:37.570 Applications Enabled 00:01:37.570 ================= 00:01:37.570 00:01:37.570 apps: 00:01:37.570 00:01:37.570 00:01:37.570 Message: 00:01:37.570 ================= 00:01:37.570 Libraries Enabled 00:01:37.570 ================= 00:01:37.570 00:01:37.570 libs: 00:01:37.570 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:37.570 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:37.570 cryptodev, dmadev, power, reorder, security, vhost, 00:01:37.570 00:01:37.570 Message: 00:01:37.570 =============== 00:01:37.570 Drivers Enabled 00:01:37.570 =============== 00:01:37.570 00:01:37.570 common: 00:01:37.570 00:01:37.570 bus: 00:01:37.570 pci, vdev, 00:01:37.570 mempool: 00:01:37.570 ring, 00:01:37.570 dma: 00:01:37.571 00:01:37.571 net: 00:01:37.571 00:01:37.571 crypto: 00:01:37.571 00:01:37.571 compress: 00:01:37.571 00:01:37.571 vdpa: 00:01:37.571 00:01:37.571 00:01:37.571 Message: 00:01:37.571 ================= 00:01:37.571 Content Skipped 00:01:37.571 ================= 00:01:37.571 00:01:37.571 apps: 00:01:37.571 dumpcap: explicitly disabled via build config 00:01:37.571 graph: explicitly disabled via build config 00:01:37.571 pdump: explicitly disabled via build config 00:01:37.571 proc-info: explicitly disabled via build config 00:01:37.571 test-acl: explicitly disabled via build config 00:01:37.571 test-bbdev: explicitly disabled via build config 00:01:37.571 test-cmdline: explicitly disabled via build config 00:01:37.571 test-compress-perf: explicitly disabled via build config 00:01:37.571 test-crypto-perf: explicitly disabled via build config 00:01:37.571 test-dma-perf: explicitly disabled via build config 00:01:37.571 test-eventdev: explicitly disabled via build config 00:01:37.571 test-fib: explicitly disabled via build config 00:01:37.571 test-flow-perf: explicitly disabled via build config 00:01:37.571 test-gpudev: explicitly disabled via build config 00:01:37.571 test-mldev: explicitly disabled via build config 00:01:37.571 test-pipeline: explicitly disabled via build config 00:01:37.571 test-pmd: explicitly disabled via build config 
00:01:37.571 test-regex: explicitly disabled via build config 00:01:37.571 test-sad: explicitly disabled via build config 00:01:37.571 test-security-perf: explicitly disabled via build config 00:01:37.571 00:01:37.571 libs: 00:01:37.571 argparse: explicitly disabled via build config 00:01:37.571 metrics: explicitly disabled via build config 00:01:37.571 acl: explicitly disabled via build config 00:01:37.571 bbdev: explicitly disabled via build config 00:01:37.571 bitratestats: explicitly disabled via build config 00:01:37.571 bpf: explicitly disabled via build config 00:01:37.571 cfgfile: explicitly disabled via build config 00:01:37.571 distributor: explicitly disabled via build config 00:01:37.571 efd: explicitly disabled via build config 00:01:37.571 eventdev: explicitly disabled via build config 00:01:37.571 dispatcher: explicitly disabled via build config 00:01:37.571 gpudev: explicitly disabled via build config 00:01:37.571 gro: explicitly disabled via build config 00:01:37.571 gso: explicitly disabled via build config 00:01:37.571 ip_frag: explicitly disabled via build config 00:01:37.571 jobstats: explicitly disabled via build config 00:01:37.571 latencystats: explicitly disabled via build config 00:01:37.571 lpm: explicitly disabled via build config 00:01:37.571 member: explicitly disabled via build config 00:01:37.571 pcapng: explicitly disabled via build config 00:01:37.571 rawdev: explicitly disabled via build config 00:01:37.571 regexdev: explicitly disabled via build config 00:01:37.571 mldev: explicitly disabled via build config 00:01:37.571 rib: explicitly disabled via build config 00:01:37.571 sched: explicitly disabled via build config 00:01:37.571 stack: explicitly disabled via build config 00:01:37.571 ipsec: explicitly disabled via build config 00:01:37.571 pdcp: explicitly disabled via build config 00:01:37.571 fib: explicitly disabled via build config 00:01:37.571 port: explicitly disabled via build config 00:01:37.571 pdump: explicitly disabled via build config 00:01:37.571 table: explicitly disabled via build config 00:01:37.571 pipeline: explicitly disabled via build config 00:01:37.571 graph: explicitly disabled via build config 00:01:37.571 node: explicitly disabled via build config 00:01:37.571 00:01:37.571 drivers: 00:01:37.571 common/cpt: not in enabled drivers build config 00:01:37.571 common/dpaax: not in enabled drivers build config 00:01:37.571 common/iavf: not in enabled drivers build config 00:01:37.571 common/idpf: not in enabled drivers build config 00:01:37.571 common/ionic: not in enabled drivers build config 00:01:37.571 common/mvep: not in enabled drivers build config 00:01:37.571 common/octeontx: not in enabled drivers build config 00:01:37.571 bus/auxiliary: not in enabled drivers build config 00:01:37.571 bus/cdx: not in enabled drivers build config 00:01:37.571 bus/dpaa: not in enabled drivers build config 00:01:37.571 bus/fslmc: not in enabled drivers build config 00:01:37.571 bus/ifpga: not in enabled drivers build config 00:01:37.571 bus/platform: not in enabled drivers build config 00:01:37.571 bus/uacce: not in enabled drivers build config 00:01:37.571 bus/vmbus: not in enabled drivers build config 00:01:37.571 common/cnxk: not in enabled drivers build config 00:01:37.571 common/mlx5: not in enabled drivers build config 00:01:37.571 common/nfp: not in enabled drivers build config 00:01:37.571 common/nitrox: not in enabled drivers build config 00:01:37.571 common/qat: not in enabled drivers build config 00:01:37.571 common/sfc_efx: not in 
enabled drivers build config 00:01:37.571 mempool/bucket: not in enabled drivers build config 00:01:37.571 mempool/cnxk: not in enabled drivers build config 00:01:37.571 mempool/dpaa: not in enabled drivers build config 00:01:37.571 mempool/dpaa2: not in enabled drivers build config 00:01:37.571 mempool/octeontx: not in enabled drivers build config 00:01:37.571 mempool/stack: not in enabled drivers build config 00:01:37.571 dma/cnxk: not in enabled drivers build config 00:01:37.571 dma/dpaa: not in enabled drivers build config 00:01:37.571 dma/dpaa2: not in enabled drivers build config 00:01:37.571 dma/hisilicon: not in enabled drivers build config 00:01:37.571 dma/idxd: not in enabled drivers build config 00:01:37.571 dma/ioat: not in enabled drivers build config 00:01:37.571 dma/skeleton: not in enabled drivers build config 00:01:37.571 net/af_packet: not in enabled drivers build config 00:01:37.571 net/af_xdp: not in enabled drivers build config 00:01:37.571 net/ark: not in enabled drivers build config 00:01:37.571 net/atlantic: not in enabled drivers build config 00:01:37.571 net/avp: not in enabled drivers build config 00:01:37.571 net/axgbe: not in enabled drivers build config 00:01:37.571 net/bnx2x: not in enabled drivers build config 00:01:37.571 net/bnxt: not in enabled drivers build config 00:01:37.571 net/bonding: not in enabled drivers build config 00:01:37.571 net/cnxk: not in enabled drivers build config 00:01:37.571 net/cpfl: not in enabled drivers build config 00:01:37.571 net/cxgbe: not in enabled drivers build config 00:01:37.571 net/dpaa: not in enabled drivers build config 00:01:37.571 net/dpaa2: not in enabled drivers build config 00:01:37.571 net/e1000: not in enabled drivers build config 00:01:37.571 net/ena: not in enabled drivers build config 00:01:37.571 net/enetc: not in enabled drivers build config 00:01:37.571 net/enetfec: not in enabled drivers build config 00:01:37.571 net/enic: not in enabled drivers build config 00:01:37.571 net/failsafe: not in enabled drivers build config 00:01:37.571 net/fm10k: not in enabled drivers build config 00:01:37.571 net/gve: not in enabled drivers build config 00:01:37.571 net/hinic: not in enabled drivers build config 00:01:37.571 net/hns3: not in enabled drivers build config 00:01:37.571 net/i40e: not in enabled drivers build config 00:01:37.571 net/iavf: not in enabled drivers build config 00:01:37.571 net/ice: not in enabled drivers build config 00:01:37.571 net/idpf: not in enabled drivers build config 00:01:37.571 net/igc: not in enabled drivers build config 00:01:37.571 net/ionic: not in enabled drivers build config 00:01:37.571 net/ipn3ke: not in enabled drivers build config 00:01:37.571 net/ixgbe: not in enabled drivers build config 00:01:37.571 net/mana: not in enabled drivers build config 00:01:37.571 net/memif: not in enabled drivers build config 00:01:37.571 net/mlx4: not in enabled drivers build config 00:01:37.571 net/mlx5: not in enabled drivers build config 00:01:37.571 net/mvneta: not in enabled drivers build config 00:01:37.571 net/mvpp2: not in enabled drivers build config 00:01:37.571 net/netvsc: not in enabled drivers build config 00:01:37.571 net/nfb: not in enabled drivers build config 00:01:37.571 net/nfp: not in enabled drivers build config 00:01:37.571 net/ngbe: not in enabled drivers build config 00:01:37.571 net/null: not in enabled drivers build config 00:01:37.571 net/octeontx: not in enabled drivers build config 00:01:37.571 net/octeon_ep: not in enabled drivers build config 00:01:37.571 
net/pcap: not in enabled drivers build config 00:01:37.571 net/pfe: not in enabled drivers build config 00:01:37.571 net/qede: not in enabled drivers build config 00:01:37.571 net/ring: not in enabled drivers build config 00:01:37.571 net/sfc: not in enabled drivers build config 00:01:37.571 net/softnic: not in enabled drivers build config 00:01:37.571 net/tap: not in enabled drivers build config 00:01:37.571 net/thunderx: not in enabled drivers build config 00:01:37.571 net/txgbe: not in enabled drivers build config 00:01:37.571 net/vdev_netvsc: not in enabled drivers build config 00:01:37.571 net/vhost: not in enabled drivers build config 00:01:37.571 net/virtio: not in enabled drivers build config 00:01:37.571 net/vmxnet3: not in enabled drivers build config 00:01:37.571 raw/*: missing internal dependency, "rawdev" 00:01:37.571 crypto/armv8: not in enabled drivers build config 00:01:37.571 crypto/bcmfs: not in enabled drivers build config 00:01:37.571 crypto/caam_jr: not in enabled drivers build config 00:01:37.571 crypto/ccp: not in enabled drivers build config 00:01:37.571 crypto/cnxk: not in enabled drivers build config 00:01:37.571 crypto/dpaa_sec: not in enabled drivers build config 00:01:37.571 crypto/dpaa2_sec: not in enabled drivers build config 00:01:37.571 crypto/ipsec_mb: not in enabled drivers build config 00:01:37.572 crypto/mlx5: not in enabled drivers build config 00:01:37.572 crypto/mvsam: not in enabled drivers build config 00:01:37.572 crypto/nitrox: not in enabled drivers build config 00:01:37.572 crypto/null: not in enabled drivers build config 00:01:37.572 crypto/octeontx: not in enabled drivers build config 00:01:37.572 crypto/openssl: not in enabled drivers build config 00:01:37.572 crypto/scheduler: not in enabled drivers build config 00:01:37.572 crypto/uadk: not in enabled drivers build config 00:01:37.572 crypto/virtio: not in enabled drivers build config 00:01:37.572 compress/isal: not in enabled drivers build config 00:01:37.572 compress/mlx5: not in enabled drivers build config 00:01:37.572 compress/nitrox: not in enabled drivers build config 00:01:37.572 compress/octeontx: not in enabled drivers build config 00:01:37.572 compress/zlib: not in enabled drivers build config 00:01:37.572 regex/*: missing internal dependency, "regexdev" 00:01:37.572 ml/*: missing internal dependency, "mldev" 00:01:37.572 vdpa/ifc: not in enabled drivers build config 00:01:37.572 vdpa/mlx5: not in enabled drivers build config 00:01:37.572 vdpa/nfp: not in enabled drivers build config 00:01:37.572 vdpa/sfc: not in enabled drivers build config 00:01:37.572 event/*: missing internal dependency, "eventdev" 00:01:37.572 baseband/*: missing internal dependency, "bbdev" 00:01:37.572 gpu/*: missing internal dependency, "gpudev" 00:01:37.572 00:01:37.572 00:01:37.572 Build targets in project: 85 00:01:37.572 00:01:37.572 DPDK 24.03.0 00:01:37.572 00:01:37.572 User defined options 00:01:37.572 buildtype : debug 00:01:37.572 default_library : shared 00:01:37.572 libdir : lib 00:01:37.572 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:37.572 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:37.572 c_link_args : 00:01:37.572 cpu_instruction_set: native 00:01:37.572 disable_apps : 
test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:37.572 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:37.572 enable_docs : false 00:01:37.572 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:37.572 enable_kmods : false 00:01:37.572 max_lcores : 128 00:01:37.572 tests : false 00:01:37.572 00:01:37.572 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:37.833 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:37.833 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:37.833 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:37.833 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:37.833 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:37.833 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:37.833 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:37.833 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:37.833 [8/268] Linking static target lib/librte_kvargs.a 00:01:37.833 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:37.833 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:38.098 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:38.098 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:38.098 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:38.098 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:38.098 [15/268] Linking static target lib/librte_log.a 00:01:38.098 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:38.668 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.668 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:38.668 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:38.668 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:38.668 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:38.668 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:38.668 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:38.668 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:38.926 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:38.926 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:38.926 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:38.926 [28/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:38.926 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:38.926 [30/268] Compiling C object 
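The DPDK 24.03 configuration summarized under "User defined options" above is normally driven by SPDK's dpdkbuild rather than invoked by hand; the following is a hypothetical reconstruction of an equivalent manual `meson setup` call, kept as an editorial assumption. Every option value is taken from the log, and the long `disable_apps`/`disable_libs` lists are omitted here since they appear verbatim above.

```bash
#!/usr/bin/env bash
# Hypothetical manual equivalent of the DPDK configuration shown above.
DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk

meson setup "$DPDK_DIR/build-tmp" "$DPDK_DIR" \
    --buildtype=debug \
    --default-library=shared \
    --libdir=lib \
    --prefix="$DPDK_DIR/build" \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Denable_kmods=false \
    -Dmax_lcores=128 -Dtests=false
    # plus -Ddisable_apps=... and -Ddisable_libs=... as listed in the log

ninja -C "$DPDK_DIR/build-tmp"
```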
lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:38.926 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:38.926 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:38.926 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:38.926 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:38.926 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:38.926 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:38.926 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:38.926 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:38.926 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:38.926 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:38.926 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:38.926 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:38.926 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:38.926 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:38.926 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:38.926 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:38.926 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:38.926 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:38.926 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:38.926 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:38.926 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:38.926 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:38.926 [53/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:38.926 [54/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:38.926 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:38.926 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:38.926 [57/268] Linking static target lib/librte_telemetry.a 00:01:38.926 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:39.191 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:39.191 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:39.191 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:39.191 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:39.191 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:39.191 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:39.191 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:39.191 [66/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.452 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:39.452 [68/268] Linking target lib/librte_log.so.24.1 00:01:39.452 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:39.452 [70/268] Linking static target lib/librte_pci.a 
00:01:39.452 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:39.452 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:39.713 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:39.713 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:39.713 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:39.713 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:39.713 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:39.713 [78/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:39.713 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:39.713 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:39.713 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:39.713 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:39.713 [83/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:39.713 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:39.713 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:39.713 [86/268] Linking target lib/librte_kvargs.so.24.1 00:01:39.713 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:39.713 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:39.713 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:39.713 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:39.713 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:39.713 [92/268] Linking static target lib/librte_ring.a 00:01:39.973 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:39.973 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:39.973 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:39.973 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:39.973 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:39.973 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:39.973 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:39.973 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:39.973 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:39.973 [102/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:39.973 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:39.973 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:39.973 [105/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:39.973 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:39.973 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:39.973 [108/268] Linking static target lib/librte_rcu.a 00:01:39.973 [109/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.973 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:39.973 [111/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:39.973 [112/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.973 [113/268] Linking static target lib/librte_eal.a 00:01:39.973 [114/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:39.973 [115/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.973 [116/268] Linking static target lib/librte_meter.a 00:01:39.973 [117/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:40.237 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:40.237 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:40.237 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:40.237 [121/268] Linking target lib/librte_telemetry.so.24.1 00:01:40.237 [122/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:40.237 [123/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:40.237 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:40.237 [125/268] Linking static target lib/librte_mempool.a 00:01:40.237 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:40.237 [127/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:40.237 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:40.237 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:40.237 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:40.237 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:40.499 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:40.499 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:40.499 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:40.499 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:40.499 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.499 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:40.499 [138/268] Linking static target lib/librte_net.a 00:01:40.499 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:40.499 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:40.758 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:40.758 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:40.758 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:40.758 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:40.758 [145/268] Linking static target lib/librte_cmdline.a 00:01:40.758 [146/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.758 [147/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.758 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:40.758 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:40.758 [150/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.758 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:40.758 [152/268] Linking 
static target lib/librte_timer.a 00:01:41.020 [153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:41.020 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:41.020 [155/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:41.021 [156/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.021 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:41.021 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:41.021 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:41.021 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:41.021 [161/268] Linking static target lib/librte_dmadev.a 00:01:41.021 [162/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:41.021 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:41.280 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:41.280 [165/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:41.280 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:41.280 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.280 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:41.280 [169/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.280 [170/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:41.280 [171/268] Linking static target lib/librte_compressdev.a 00:01:41.280 [172/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:41.280 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:41.280 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:41.280 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:41.280 [176/268] Linking static target lib/librte_power.a 00:01:41.537 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:41.537 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:41.537 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:41.537 [180/268] Linking static target lib/librte_hash.a 00:01:41.537 [181/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:41.537 [182/268] Linking static target lib/librte_mbuf.a 00:01:41.537 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.537 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:41.537 [185/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:41.537 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:41.537 [187/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:41.537 [188/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:41.537 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:41.537 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:41.537 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:41.537 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:41.537 [193/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.537 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.795 [195/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:41.795 [196/268] Linking static target lib/librte_reorder.a 00:01:41.795 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:41.795 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:41.795 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:41.795 [200/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.795 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.795 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:41.795 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:41.795 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:41.795 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.795 [206/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.795 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.795 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:41.795 [209/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.795 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.054 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:42.054 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.054 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:42.054 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:42.054 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.054 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.054 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.054 [218/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:42.054 [219/268] Linking static target lib/librte_security.a 00:01:42.054 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:42.054 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:42.054 [222/268] Linking static target lib/librte_ethdev.a 00:01:42.312 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.312 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:42.312 [225/268] Linking static target lib/librte_cryptodev.a 00:01:42.312 [226/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.245 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.619 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:46.518 [229/268] 
Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.518 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.518 [231/268] Linking target lib/librte_eal.so.24.1 00:01:46.518 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:46.518 [233/268] Linking target lib/librte_pci.so.24.1 00:01:46.518 [234/268] Linking target lib/librte_meter.so.24.1 00:01:46.518 [235/268] Linking target lib/librte_ring.so.24.1 00:01:46.518 [236/268] Linking target lib/librte_timer.so.24.1 00:01:46.518 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:46.518 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:46.777 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:46.777 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:46.777 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:46.777 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:46.777 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:46.777 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:46.777 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:46.777 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:46.777 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:46.777 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:47.035 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:47.035 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:47.035 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:47.035 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:47.035 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:47.035 [254/268] Linking target lib/librte_net.so.24.1 00:01:47.035 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:47.293 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:47.293 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:47.293 [258/268] Linking target lib/librte_security.so.24.1 00:01:47.293 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:47.293 [260/268] Linking target lib/librte_hash.so.24.1 00:01:47.293 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:47.293 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:47.293 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:47.552 [264/268] Linking target lib/librte_power.so.24.1 00:01:50.131 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:50.131 [266/268] Linking static target lib/librte_vhost.a 00:01:51.068 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.068 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:51.068 INFO: autodetecting backend as ninja 00:01:51.068 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:52.003 CC lib/ut/ut.o 00:01:52.004 CC lib/ut_mock/mock.o 00:01:52.004 CC lib/log/log.o 00:01:52.004 CC lib/log/log_flags.o 00:01:52.004 CC 
lib/log/log_deprecated.o 00:01:52.004 LIB libspdk_ut.a 00:01:52.004 LIB libspdk_log.a 00:01:52.004 LIB libspdk_ut_mock.a 00:01:52.004 SO libspdk_ut.so.2.0 00:01:52.004 SO libspdk_ut_mock.so.6.0 00:01:52.004 SO libspdk_log.so.7.0 00:01:52.004 SYMLINK libspdk_ut.so 00:01:52.004 SYMLINK libspdk_ut_mock.so 00:01:52.262 SYMLINK libspdk_log.so 00:01:52.262 CC lib/dma/dma.o 00:01:52.262 CC lib/ioat/ioat.o 00:01:52.262 CXX lib/trace_parser/trace.o 00:01:52.262 CC lib/util/base64.o 00:01:52.262 CC lib/util/bit_array.o 00:01:52.262 CC lib/util/cpuset.o 00:01:52.262 CC lib/util/crc16.o 00:01:52.262 CC lib/util/crc32.o 00:01:52.262 CC lib/util/crc32c.o 00:01:52.262 CC lib/util/crc32_ieee.o 00:01:52.262 CC lib/util/crc64.o 00:01:52.262 CC lib/util/dif.o 00:01:52.262 CC lib/util/fd.o 00:01:52.262 CC lib/util/fd_group.o 00:01:52.262 CC lib/util/file.o 00:01:52.262 CC lib/util/hexlify.o 00:01:52.262 CC lib/util/iov.o 00:01:52.262 CC lib/util/math.o 00:01:52.262 CC lib/util/net.o 00:01:52.262 CC lib/util/pipe.o 00:01:52.262 CC lib/util/strerror_tls.o 00:01:52.262 CC lib/util/string.o 00:01:52.262 CC lib/util/uuid.o 00:01:52.262 CC lib/util/zipf.o 00:01:52.262 CC lib/util/xor.o 00:01:52.521 CC lib/vfio_user/host/vfio_user_pci.o 00:01:52.521 CC lib/vfio_user/host/vfio_user.o 00:01:52.521 LIB libspdk_dma.a 00:01:52.521 SO libspdk_dma.so.4.0 00:01:52.521 SYMLINK libspdk_dma.so 00:01:52.521 LIB libspdk_ioat.a 00:01:52.521 SO libspdk_ioat.so.7.0 00:01:52.780 SYMLINK libspdk_ioat.so 00:01:52.780 LIB libspdk_vfio_user.a 00:01:52.780 SO libspdk_vfio_user.so.5.0 00:01:52.780 SYMLINK libspdk_vfio_user.so 00:01:52.780 LIB libspdk_util.a 00:01:53.038 SO libspdk_util.so.10.0 00:01:53.038 SYMLINK libspdk_util.so 00:01:53.296 CC lib/vmd/vmd.o 00:01:53.296 CC lib/json/json_parse.o 00:01:53.296 CC lib/env_dpdk/env.o 00:01:53.296 CC lib/conf/conf.o 00:01:53.296 CC lib/rdma_provider/common.o 00:01:53.296 CC lib/idxd/idxd.o 00:01:53.296 CC lib/json/json_util.o 00:01:53.296 CC lib/rdma_utils/rdma_utils.o 00:01:53.296 CC lib/vmd/led.o 00:01:53.296 CC lib/env_dpdk/memory.o 00:01:53.296 CC lib/idxd/idxd_user.o 00:01:53.296 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:53.296 CC lib/json/json_write.o 00:01:53.296 CC lib/env_dpdk/pci.o 00:01:53.296 CC lib/idxd/idxd_kernel.o 00:01:53.296 CC lib/env_dpdk/init.o 00:01:53.296 CC lib/env_dpdk/threads.o 00:01:53.296 CC lib/env_dpdk/pci_ioat.o 00:01:53.296 CC lib/env_dpdk/pci_virtio.o 00:01:53.296 CC lib/env_dpdk/pci_vmd.o 00:01:53.296 CC lib/env_dpdk/pci_idxd.o 00:01:53.296 CC lib/env_dpdk/pci_event.o 00:01:53.296 CC lib/env_dpdk/sigbus_handler.o 00:01:53.296 CC lib/env_dpdk/pci_dpdk.o 00:01:53.296 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:53.296 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:53.296 LIB libspdk_trace_parser.a 00:01:53.296 SO libspdk_trace_parser.so.5.0 00:01:53.554 SYMLINK libspdk_trace_parser.so 00:01:53.555 LIB libspdk_rdma_provider.a 00:01:53.555 SO libspdk_rdma_provider.so.6.0 00:01:53.555 LIB libspdk_conf.a 00:01:53.555 SO libspdk_conf.so.6.0 00:01:53.555 SYMLINK libspdk_rdma_provider.so 00:01:53.555 LIB libspdk_rdma_utils.a 00:01:53.555 SYMLINK libspdk_conf.so 00:01:53.555 SO libspdk_rdma_utils.so.1.0 00:01:53.555 LIB libspdk_json.a 00:01:53.555 SO libspdk_json.so.6.0 00:01:53.555 SYMLINK libspdk_rdma_utils.so 00:01:53.830 SYMLINK libspdk_json.so 00:01:53.830 LIB libspdk_idxd.a 00:01:53.830 CC lib/jsonrpc/jsonrpc_server.o 00:01:53.830 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:53.830 CC lib/jsonrpc/jsonrpc_client.o 00:01:53.830 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:53.830 
SO libspdk_idxd.so.12.0 00:01:53.830 SYMLINK libspdk_idxd.so 00:01:54.093 LIB libspdk_vmd.a 00:01:54.093 SO libspdk_vmd.so.6.0 00:01:54.093 SYMLINK libspdk_vmd.so 00:01:54.093 LIB libspdk_jsonrpc.a 00:01:54.093 SO libspdk_jsonrpc.so.6.0 00:01:54.380 SYMLINK libspdk_jsonrpc.so 00:01:54.380 CC lib/rpc/rpc.o 00:01:54.638 LIB libspdk_rpc.a 00:01:54.638 SO libspdk_rpc.so.6.0 00:01:54.638 SYMLINK libspdk_rpc.so 00:01:54.896 CC lib/notify/notify.o 00:01:54.896 CC lib/trace/trace.o 00:01:54.896 CC lib/notify/notify_rpc.o 00:01:54.896 CC lib/trace/trace_flags.o 00:01:54.896 CC lib/trace/trace_rpc.o 00:01:54.896 CC lib/keyring/keyring.o 00:01:54.896 CC lib/keyring/keyring_rpc.o 00:01:54.896 LIB libspdk_notify.a 00:01:55.155 SO libspdk_notify.so.6.0 00:01:55.155 LIB libspdk_keyring.a 00:01:55.155 SYMLINK libspdk_notify.so 00:01:55.155 LIB libspdk_trace.a 00:01:55.155 SO libspdk_keyring.so.1.0 00:01:55.155 SO libspdk_trace.so.10.0 00:01:55.155 SYMLINK libspdk_keyring.so 00:01:55.155 SYMLINK libspdk_trace.so 00:01:55.413 LIB libspdk_env_dpdk.a 00:01:55.413 CC lib/thread/thread.o 00:01:55.413 CC lib/thread/iobuf.o 00:01:55.413 CC lib/sock/sock.o 00:01:55.413 CC lib/sock/sock_rpc.o 00:01:55.413 SO libspdk_env_dpdk.so.15.0 00:01:55.413 SYMLINK libspdk_env_dpdk.so 00:01:55.671 LIB libspdk_sock.a 00:01:55.671 SO libspdk_sock.so.10.0 00:01:55.929 SYMLINK libspdk_sock.so 00:01:55.929 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:55.929 CC lib/nvme/nvme_ctrlr.o 00:01:55.929 CC lib/nvme/nvme_fabric.o 00:01:55.929 CC lib/nvme/nvme_ns_cmd.o 00:01:55.929 CC lib/nvme/nvme_ns.o 00:01:55.929 CC lib/nvme/nvme_pcie_common.o 00:01:55.929 CC lib/nvme/nvme_pcie.o 00:01:55.929 CC lib/nvme/nvme_qpair.o 00:01:55.929 CC lib/nvme/nvme.o 00:01:55.929 CC lib/nvme/nvme_quirks.o 00:01:55.929 CC lib/nvme/nvme_transport.o 00:01:55.929 CC lib/nvme/nvme_discovery.o 00:01:55.929 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:55.929 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:55.929 CC lib/nvme/nvme_tcp.o 00:01:55.929 CC lib/nvme/nvme_opal.o 00:01:55.929 CC lib/nvme/nvme_io_msg.o 00:01:55.929 CC lib/nvme/nvme_poll_group.o 00:01:55.929 CC lib/nvme/nvme_zns.o 00:01:55.929 CC lib/nvme/nvme_stubs.o 00:01:55.929 CC lib/nvme/nvme_auth.o 00:01:55.929 CC lib/nvme/nvme_cuse.o 00:01:55.929 CC lib/nvme/nvme_vfio_user.o 00:01:55.929 CC lib/nvme/nvme_rdma.o 00:01:56.864 LIB libspdk_thread.a 00:01:56.864 SO libspdk_thread.so.10.1 00:01:56.864 SYMLINK libspdk_thread.so 00:01:57.122 CC lib/vfu_tgt/tgt_endpoint.o 00:01:57.122 CC lib/virtio/virtio.o 00:01:57.122 CC lib/blob/request.o 00:01:57.122 CC lib/accel/accel.o 00:01:57.122 CC lib/blob/blobstore.o 00:01:57.122 CC lib/init/json_config.o 00:01:57.122 CC lib/accel/accel_rpc.o 00:01:57.122 CC lib/vfu_tgt/tgt_rpc.o 00:01:57.122 CC lib/blob/zeroes.o 00:01:57.122 CC lib/init/subsystem.o 00:01:57.122 CC lib/blob/blob_bs_dev.o 00:01:57.122 CC lib/accel/accel_sw.o 00:01:57.122 CC lib/virtio/virtio_vhost_user.o 00:01:57.122 CC lib/init/subsystem_rpc.o 00:01:57.122 CC lib/virtio/virtio_vfio_user.o 00:01:57.122 CC lib/init/rpc.o 00:01:57.122 CC lib/virtio/virtio_pci.o 00:01:57.380 LIB libspdk_init.a 00:01:57.380 SO libspdk_init.so.5.0 00:01:57.380 LIB libspdk_virtio.a 00:01:57.638 SYMLINK libspdk_init.so 00:01:57.638 SO libspdk_virtio.so.7.0 00:01:57.638 LIB libspdk_vfu_tgt.a 00:01:57.638 SO libspdk_vfu_tgt.so.3.0 00:01:57.638 SYMLINK libspdk_virtio.so 00:01:57.638 SYMLINK libspdk_vfu_tgt.so 00:01:57.638 CC lib/event/app.o 00:01:57.638 CC lib/event/reactor.o 00:01:57.638 CC lib/event/log_rpc.o 00:01:57.638 CC lib/event/app_rpc.o 
00:01:57.638 CC lib/event/scheduler_static.o 00:01:58.205 LIB libspdk_event.a 00:01:58.205 SO libspdk_event.so.14.0 00:01:58.205 LIB libspdk_accel.a 00:01:58.205 SYMLINK libspdk_event.so 00:01:58.205 SO libspdk_accel.so.16.0 00:01:58.205 SYMLINK libspdk_accel.so 00:01:58.463 LIB libspdk_nvme.a 00:01:58.463 CC lib/bdev/bdev.o 00:01:58.463 CC lib/bdev/bdev_rpc.o 00:01:58.463 CC lib/bdev/bdev_zone.o 00:01:58.463 CC lib/bdev/part.o 00:01:58.463 CC lib/bdev/scsi_nvme.o 00:01:58.463 SO libspdk_nvme.so.13.1 00:01:58.722 SYMLINK libspdk_nvme.so 00:02:00.097 LIB libspdk_blob.a 00:02:00.097 SO libspdk_blob.so.11.0 00:02:00.355 SYMLINK libspdk_blob.so 00:02:00.355 CC lib/lvol/lvol.o 00:02:00.355 CC lib/blobfs/blobfs.o 00:02:00.355 CC lib/blobfs/tree.o 00:02:00.920 LIB libspdk_bdev.a 00:02:00.920 SO libspdk_bdev.so.16.0 00:02:01.184 SYMLINK libspdk_bdev.so 00:02:01.184 CC lib/ublk/ublk.o 00:02:01.184 CC lib/nbd/nbd.o 00:02:01.184 CC lib/ublk/ublk_rpc.o 00:02:01.184 CC lib/nbd/nbd_rpc.o 00:02:01.184 CC lib/ftl/ftl_core.o 00:02:01.184 CC lib/nvmf/ctrlr.o 00:02:01.184 CC lib/ftl/ftl_init.o 00:02:01.184 CC lib/ftl/ftl_layout.o 00:02:01.184 CC lib/nvmf/ctrlr_discovery.o 00:02:01.184 CC lib/scsi/dev.o 00:02:01.184 CC lib/ftl/ftl_debug.o 00:02:01.184 CC lib/nvmf/ctrlr_bdev.o 00:02:01.184 CC lib/scsi/lun.o 00:02:01.184 CC lib/nvmf/subsystem.o 00:02:01.184 CC lib/ftl/ftl_io.o 00:02:01.184 CC lib/scsi/port.o 00:02:01.184 LIB libspdk_blobfs.a 00:02:01.184 CC lib/nvmf/nvmf.o 00:02:01.184 CC lib/ftl/ftl_sb.o 00:02:01.184 CC lib/scsi/scsi.o 00:02:01.184 CC lib/nvmf/nvmf_rpc.o 00:02:01.184 CC lib/ftl/ftl_l2p.o 00:02:01.184 CC lib/scsi/scsi_bdev.o 00:02:01.184 CC lib/nvmf/transport.o 00:02:01.184 CC lib/ftl/ftl_l2p_flat.o 00:02:01.184 CC lib/nvmf/tcp.o 00:02:01.184 CC lib/scsi/scsi_pr.o 00:02:01.184 CC lib/nvmf/stubs.o 00:02:01.184 CC lib/nvmf/mdns_server.o 00:02:01.184 CC lib/scsi/task.o 00:02:01.184 CC lib/scsi/scsi_rpc.o 00:02:01.184 CC lib/ftl/ftl_nv_cache.o 00:02:01.184 CC lib/nvmf/vfio_user.o 00:02:01.184 CC lib/ftl/ftl_band.o 00:02:01.184 CC lib/nvmf/rdma.o 00:02:01.184 CC lib/ftl/ftl_band_ops.o 00:02:01.184 CC lib/ftl/ftl_writer.o 00:02:01.184 CC lib/nvmf/auth.o 00:02:01.184 CC lib/ftl/ftl_rq.o 00:02:01.184 CC lib/ftl/ftl_reloc.o 00:02:01.184 CC lib/ftl/ftl_l2p_cache.o 00:02:01.184 CC lib/ftl/ftl_p2l.o 00:02:01.184 CC lib/ftl/mngt/ftl_mngt.o 00:02:01.184 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:01.184 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:01.184 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:01.184 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:01.445 SO libspdk_blobfs.so.10.0 00:02:01.445 SYMLINK libspdk_blobfs.so 00:02:01.445 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:01.445 LIB libspdk_lvol.a 00:02:01.707 SO libspdk_lvol.so.10.0 00:02:01.707 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:01.707 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:01.707 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:01.707 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:01.707 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:01.707 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:01.707 SYMLINK libspdk_lvol.so 00:02:01.707 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:01.707 CC lib/ftl/utils/ftl_conf.o 00:02:01.707 CC lib/ftl/utils/ftl_md.o 00:02:01.707 CC lib/ftl/utils/ftl_mempool.o 00:02:01.707 CC lib/ftl/utils/ftl_bitmap.o 00:02:01.707 CC lib/ftl/utils/ftl_property.o 00:02:01.707 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:01.707 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:01.707 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:01.708 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:01.708 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:02:01.708 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:01.969 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:01.969 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:01.969 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:01.969 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:01.969 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:01.969 CC lib/ftl/base/ftl_base_dev.o 00:02:01.969 CC lib/ftl/base/ftl_base_bdev.o 00:02:01.969 CC lib/ftl/ftl_trace.o 00:02:01.969 LIB libspdk_nbd.a 00:02:02.227 SO libspdk_nbd.so.7.0 00:02:02.227 SYMLINK libspdk_nbd.so 00:02:02.227 LIB libspdk_scsi.a 00:02:02.227 SO libspdk_scsi.so.9.0 00:02:02.227 LIB libspdk_ublk.a 00:02:02.227 SO libspdk_ublk.so.3.0 00:02:02.485 SYMLINK libspdk_scsi.so 00:02:02.485 SYMLINK libspdk_ublk.so 00:02:02.485 CC lib/iscsi/conn.o 00:02:02.485 CC lib/vhost/vhost.o 00:02:02.485 CC lib/vhost/vhost_rpc.o 00:02:02.485 CC lib/iscsi/init_grp.o 00:02:02.485 CC lib/iscsi/iscsi.o 00:02:02.485 CC lib/vhost/vhost_scsi.o 00:02:02.485 CC lib/vhost/vhost_blk.o 00:02:02.485 CC lib/iscsi/md5.o 00:02:02.485 CC lib/vhost/rte_vhost_user.o 00:02:02.485 CC lib/iscsi/param.o 00:02:02.485 CC lib/iscsi/portal_grp.o 00:02:02.485 CC lib/iscsi/tgt_node.o 00:02:02.485 CC lib/iscsi/iscsi_subsystem.o 00:02:02.485 CC lib/iscsi/iscsi_rpc.o 00:02:02.485 CC lib/iscsi/task.o 00:02:02.744 LIB libspdk_ftl.a 00:02:02.744 SO libspdk_ftl.so.9.0 00:02:03.312 SYMLINK libspdk_ftl.so 00:02:03.908 LIB libspdk_vhost.a 00:02:03.908 SO libspdk_vhost.so.8.0 00:02:03.908 LIB libspdk_nvmf.a 00:02:03.908 SYMLINK libspdk_vhost.so 00:02:03.908 SO libspdk_nvmf.so.19.0 00:02:03.908 LIB libspdk_iscsi.a 00:02:03.908 SO libspdk_iscsi.so.8.0 00:02:04.173 SYMLINK libspdk_nvmf.so 00:02:04.173 SYMLINK libspdk_iscsi.so 00:02:04.432 CC module/vfu_device/vfu_virtio.o 00:02:04.432 CC module/vfu_device/vfu_virtio_blk.o 00:02:04.432 CC module/vfu_device/vfu_virtio_scsi.o 00:02:04.432 CC module/vfu_device/vfu_virtio_rpc.o 00:02:04.432 CC module/env_dpdk/env_dpdk_rpc.o 00:02:04.432 CC module/sock/posix/posix.o 00:02:04.432 CC module/blob/bdev/blob_bdev.o 00:02:04.432 CC module/accel/ioat/accel_ioat.o 00:02:04.432 CC module/accel/error/accel_error.o 00:02:04.432 CC module/accel/ioat/accel_ioat_rpc.o 00:02:04.432 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:04.432 CC module/accel/error/accel_error_rpc.o 00:02:04.432 CC module/accel/iaa/accel_iaa.o 00:02:04.432 CC module/accel/iaa/accel_iaa_rpc.o 00:02:04.432 CC module/keyring/file/keyring.o 00:02:04.432 CC module/accel/dsa/accel_dsa.o 00:02:04.432 CC module/scheduler/gscheduler/gscheduler.o 00:02:04.432 CC module/accel/dsa/accel_dsa_rpc.o 00:02:04.432 CC module/keyring/file/keyring_rpc.o 00:02:04.432 CC module/keyring/linux/keyring.o 00:02:04.432 CC module/keyring/linux/keyring_rpc.o 00:02:04.432 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:04.690 LIB libspdk_env_dpdk_rpc.a 00:02:04.690 SO libspdk_env_dpdk_rpc.so.6.0 00:02:04.690 SYMLINK libspdk_env_dpdk_rpc.so 00:02:04.690 LIB libspdk_keyring_linux.a 00:02:04.690 LIB libspdk_keyring_file.a 00:02:04.690 LIB libspdk_scheduler_gscheduler.a 00:02:04.690 LIB libspdk_scheduler_dpdk_governor.a 00:02:04.690 SO libspdk_keyring_linux.so.1.0 00:02:04.690 SO libspdk_keyring_file.so.1.0 00:02:04.690 SO libspdk_scheduler_gscheduler.so.4.0 00:02:04.690 LIB libspdk_accel_error.a 00:02:04.690 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:04.690 LIB libspdk_accel_ioat.a 00:02:04.690 LIB libspdk_scheduler_dynamic.a 00:02:04.690 LIB libspdk_accel_iaa.a 00:02:04.690 SO libspdk_accel_error.so.2.0 00:02:04.690 SO 
libspdk_scheduler_dynamic.so.4.0 00:02:04.690 SO libspdk_accel_ioat.so.6.0 00:02:04.690 SYMLINK libspdk_keyring_linux.so 00:02:04.690 SYMLINK libspdk_keyring_file.so 00:02:04.690 SYMLINK libspdk_scheduler_gscheduler.so 00:02:04.690 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:04.690 SO libspdk_accel_iaa.so.3.0 00:02:04.949 SYMLINK libspdk_accel_error.so 00:02:04.949 LIB libspdk_accel_dsa.a 00:02:04.949 SYMLINK libspdk_scheduler_dynamic.so 00:02:04.949 LIB libspdk_blob_bdev.a 00:02:04.949 SYMLINK libspdk_accel_ioat.so 00:02:04.949 SYMLINK libspdk_accel_iaa.so 00:02:04.949 SO libspdk_accel_dsa.so.5.0 00:02:04.949 SO libspdk_blob_bdev.so.11.0 00:02:04.949 SYMLINK libspdk_blob_bdev.so 00:02:04.949 SYMLINK libspdk_accel_dsa.so 00:02:05.208 LIB libspdk_vfu_device.a 00:02:05.208 SO libspdk_vfu_device.so.3.0 00:02:05.208 CC module/bdev/null/bdev_null.o 00:02:05.208 CC module/bdev/delay/vbdev_delay.o 00:02:05.208 CC module/bdev/error/vbdev_error_rpc.o 00:02:05.208 CC module/bdev/error/vbdev_error.o 00:02:05.208 CC module/bdev/nvme/bdev_nvme.o 00:02:05.208 CC module/blobfs/bdev/blobfs_bdev.o 00:02:05.208 CC module/bdev/gpt/gpt.o 00:02:05.208 CC module/bdev/lvol/vbdev_lvol.o 00:02:05.208 CC module/bdev/gpt/vbdev_gpt.o 00:02:05.208 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:05.208 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:05.208 CC module/bdev/null/bdev_null_rpc.o 00:02:05.208 CC module/bdev/aio/bdev_aio.o 00:02:05.208 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:05.208 CC module/bdev/passthru/vbdev_passthru.o 00:02:05.208 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:05.208 CC module/bdev/aio/bdev_aio_rpc.o 00:02:05.208 CC module/bdev/malloc/bdev_malloc.o 00:02:05.208 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:05.208 CC module/bdev/split/vbdev_split.o 00:02:05.208 CC module/bdev/nvme/nvme_rpc.o 00:02:05.208 CC module/bdev/nvme/bdev_mdns_client.o 00:02:05.208 CC module/bdev/split/vbdev_split_rpc.o 00:02:05.208 CC module/bdev/raid/bdev_raid.o 00:02:05.208 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:05.208 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:05.208 CC module/bdev/raid/bdev_raid_rpc.o 00:02:05.208 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:05.208 CC module/bdev/nvme/vbdev_opal.o 00:02:05.208 CC module/bdev/raid/bdev_raid_sb.o 00:02:05.208 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:05.208 CC module/bdev/raid/raid0.o 00:02:05.208 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:05.208 CC module/bdev/iscsi/bdev_iscsi.o 00:02:05.208 CC module/bdev/raid/raid1.o 00:02:05.208 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:05.208 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:05.208 CC module/bdev/raid/concat.o 00:02:05.208 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:05.208 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:05.208 CC module/bdev/ftl/bdev_ftl.o 00:02:05.208 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:05.208 SYMLINK libspdk_vfu_device.so 00:02:05.466 LIB libspdk_sock_posix.a 00:02:05.466 SO libspdk_sock_posix.so.6.0 00:02:05.466 SYMLINK libspdk_sock_posix.so 00:02:05.466 LIB libspdk_blobfs_bdev.a 00:02:05.466 SO libspdk_blobfs_bdev.so.6.0 00:02:05.725 SYMLINK libspdk_blobfs_bdev.so 00:02:05.725 LIB libspdk_bdev_split.a 00:02:05.725 LIB libspdk_bdev_null.a 00:02:05.725 LIB libspdk_bdev_ftl.a 00:02:05.725 SO libspdk_bdev_split.so.6.0 00:02:05.725 LIB libspdk_bdev_error.a 00:02:05.725 SO libspdk_bdev_null.so.6.0 00:02:05.725 SO libspdk_bdev_ftl.so.6.0 00:02:05.725 LIB libspdk_bdev_gpt.a 00:02:05.725 SO libspdk_bdev_error.so.6.0 00:02:05.725 LIB libspdk_bdev_passthru.a 
00:02:05.725 SO libspdk_bdev_gpt.so.6.0 00:02:05.725 SYMLINK libspdk_bdev_split.so 00:02:05.725 SO libspdk_bdev_passthru.so.6.0 00:02:05.725 SYMLINK libspdk_bdev_null.so 00:02:05.725 SYMLINK libspdk_bdev_ftl.so 00:02:05.725 LIB libspdk_bdev_aio.a 00:02:05.725 LIB libspdk_bdev_delay.a 00:02:05.725 LIB libspdk_bdev_zone_block.a 00:02:05.725 SYMLINK libspdk_bdev_error.so 00:02:05.725 LIB libspdk_bdev_iscsi.a 00:02:05.725 SO libspdk_bdev_aio.so.6.0 00:02:05.725 SO libspdk_bdev_delay.so.6.0 00:02:05.725 SYMLINK libspdk_bdev_gpt.so 00:02:05.725 LIB libspdk_bdev_malloc.a 00:02:05.725 SO libspdk_bdev_zone_block.so.6.0 00:02:05.725 SYMLINK libspdk_bdev_passthru.so 00:02:05.725 SO libspdk_bdev_iscsi.so.6.0 00:02:05.725 SO libspdk_bdev_malloc.so.6.0 00:02:05.725 SYMLINK libspdk_bdev_delay.so 00:02:05.725 SYMLINK libspdk_bdev_aio.so 00:02:05.725 SYMLINK libspdk_bdev_zone_block.so 00:02:05.725 SYMLINK libspdk_bdev_iscsi.so 00:02:05.983 SYMLINK libspdk_bdev_malloc.so 00:02:05.983 LIB libspdk_bdev_lvol.a 00:02:05.983 LIB libspdk_bdev_virtio.a 00:02:05.983 SO libspdk_bdev_lvol.so.6.0 00:02:05.983 SO libspdk_bdev_virtio.so.6.0 00:02:05.983 SYMLINK libspdk_bdev_lvol.so 00:02:05.983 SYMLINK libspdk_bdev_virtio.so 00:02:06.241 LIB libspdk_bdev_raid.a 00:02:06.499 SO libspdk_bdev_raid.so.6.0 00:02:06.499 SYMLINK libspdk_bdev_raid.so 00:02:07.432 LIB libspdk_bdev_nvme.a 00:02:07.432 SO libspdk_bdev_nvme.so.7.0 00:02:07.690 SYMLINK libspdk_bdev_nvme.so 00:02:07.948 CC module/event/subsystems/keyring/keyring.o 00:02:07.948 CC module/event/subsystems/iobuf/iobuf.o 00:02:07.948 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:07.948 CC module/event/subsystems/sock/sock.o 00:02:07.948 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:07.948 CC module/event/subsystems/vmd/vmd.o 00:02:07.948 CC module/event/subsystems/scheduler/scheduler.o 00:02:07.948 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:07.948 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:08.206 LIB libspdk_event_keyring.a 00:02:08.206 LIB libspdk_event_vhost_blk.a 00:02:08.206 LIB libspdk_event_scheduler.a 00:02:08.206 LIB libspdk_event_vmd.a 00:02:08.206 LIB libspdk_event_vfu_tgt.a 00:02:08.206 LIB libspdk_event_sock.a 00:02:08.206 LIB libspdk_event_iobuf.a 00:02:08.206 SO libspdk_event_keyring.so.1.0 00:02:08.206 SO libspdk_event_vhost_blk.so.3.0 00:02:08.206 SO libspdk_event_vfu_tgt.so.3.0 00:02:08.206 SO libspdk_event_scheduler.so.4.0 00:02:08.206 SO libspdk_event_vmd.so.6.0 00:02:08.206 SO libspdk_event_sock.so.5.0 00:02:08.206 SO libspdk_event_iobuf.so.3.0 00:02:08.206 SYMLINK libspdk_event_keyring.so 00:02:08.206 SYMLINK libspdk_event_vhost_blk.so 00:02:08.206 SYMLINK libspdk_event_vfu_tgt.so 00:02:08.206 SYMLINK libspdk_event_scheduler.so 00:02:08.206 SYMLINK libspdk_event_sock.so 00:02:08.206 SYMLINK libspdk_event_vmd.so 00:02:08.206 SYMLINK libspdk_event_iobuf.so 00:02:08.464 CC module/event/subsystems/accel/accel.o 00:02:08.464 LIB libspdk_event_accel.a 00:02:08.464 SO libspdk_event_accel.so.6.0 00:02:08.723 SYMLINK libspdk_event_accel.so 00:02:08.723 CC module/event/subsystems/bdev/bdev.o 00:02:08.981 LIB libspdk_event_bdev.a 00:02:08.981 SO libspdk_event_bdev.so.6.0 00:02:08.981 SYMLINK libspdk_event_bdev.so 00:02:09.239 CC module/event/subsystems/scsi/scsi.o 00:02:09.239 CC module/event/subsystems/nbd/nbd.o 00:02:09.240 CC module/event/subsystems/ublk/ublk.o 00:02:09.240 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:09.240 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:09.240 LIB libspdk_event_ublk.a 00:02:09.240 LIB 
libspdk_event_nbd.a 00:02:09.240 LIB libspdk_event_scsi.a 00:02:09.240 SO libspdk_event_ublk.so.3.0 00:02:09.240 SO libspdk_event_nbd.so.6.0 00:02:09.497 SO libspdk_event_scsi.so.6.0 00:02:09.497 SYMLINK libspdk_event_nbd.so 00:02:09.497 SYMLINK libspdk_event_ublk.so 00:02:09.497 SYMLINK libspdk_event_scsi.so 00:02:09.497 LIB libspdk_event_nvmf.a 00:02:09.497 SO libspdk_event_nvmf.so.6.0 00:02:09.497 SYMLINK libspdk_event_nvmf.so 00:02:09.497 CC module/event/subsystems/iscsi/iscsi.o 00:02:09.497 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:09.755 LIB libspdk_event_vhost_scsi.a 00:02:09.755 LIB libspdk_event_iscsi.a 00:02:09.755 SO libspdk_event_vhost_scsi.so.3.0 00:02:09.755 SO libspdk_event_iscsi.so.6.0 00:02:09.755 SYMLINK libspdk_event_vhost_scsi.so 00:02:09.755 SYMLINK libspdk_event_iscsi.so 00:02:10.012 SO libspdk.so.6.0 00:02:10.012 SYMLINK libspdk.so 00:02:10.012 CC app/trace_record/trace_record.o 00:02:10.012 CC app/spdk_top/spdk_top.o 00:02:10.012 CC app/spdk_nvme_perf/perf.o 00:02:10.012 CC app/spdk_lspci/spdk_lspci.o 00:02:10.012 CC app/spdk_nvme_discover/discovery_aer.o 00:02:10.012 CXX app/trace/trace.o 00:02:10.012 CC app/spdk_nvme_identify/identify.o 00:02:10.012 TEST_HEADER include/spdk/accel.h 00:02:10.012 CC test/rpc_client/rpc_client_test.o 00:02:10.012 TEST_HEADER include/spdk/accel_module.h 00:02:10.012 TEST_HEADER include/spdk/assert.h 00:02:10.012 TEST_HEADER include/spdk/barrier.h 00:02:10.012 TEST_HEADER include/spdk/base64.h 00:02:10.012 TEST_HEADER include/spdk/bdev.h 00:02:10.012 TEST_HEADER include/spdk/bdev_module.h 00:02:10.012 TEST_HEADER include/spdk/bdev_zone.h 00:02:10.012 TEST_HEADER include/spdk/bit_array.h 00:02:10.012 TEST_HEADER include/spdk/bit_pool.h 00:02:10.012 TEST_HEADER include/spdk/blob_bdev.h 00:02:10.012 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:10.012 TEST_HEADER include/spdk/blobfs.h 00:02:10.012 TEST_HEADER include/spdk/blob.h 00:02:10.012 TEST_HEADER include/spdk/conf.h 00:02:10.012 TEST_HEADER include/spdk/config.h 00:02:10.012 TEST_HEADER include/spdk/cpuset.h 00:02:10.012 TEST_HEADER include/spdk/crc16.h 00:02:10.012 TEST_HEADER include/spdk/crc32.h 00:02:10.012 TEST_HEADER include/spdk/dif.h 00:02:10.012 TEST_HEADER include/spdk/crc64.h 00:02:10.012 TEST_HEADER include/spdk/dma.h 00:02:10.012 TEST_HEADER include/spdk/endian.h 00:02:10.012 TEST_HEADER include/spdk/env.h 00:02:10.012 TEST_HEADER include/spdk/env_dpdk.h 00:02:10.012 TEST_HEADER include/spdk/event.h 00:02:10.012 TEST_HEADER include/spdk/fd_group.h 00:02:10.012 TEST_HEADER include/spdk/fd.h 00:02:10.012 TEST_HEADER include/spdk/file.h 00:02:10.012 TEST_HEADER include/spdk/ftl.h 00:02:10.012 TEST_HEADER include/spdk/gpt_spec.h 00:02:10.012 TEST_HEADER include/spdk/hexlify.h 00:02:10.012 TEST_HEADER include/spdk/histogram_data.h 00:02:10.012 TEST_HEADER include/spdk/idxd.h 00:02:10.012 TEST_HEADER include/spdk/idxd_spec.h 00:02:10.012 TEST_HEADER include/spdk/init.h 00:02:10.012 TEST_HEADER include/spdk/ioat.h 00:02:10.012 TEST_HEADER include/spdk/ioat_spec.h 00:02:10.012 TEST_HEADER include/spdk/iscsi_spec.h 00:02:10.012 TEST_HEADER include/spdk/json.h 00:02:10.012 TEST_HEADER include/spdk/keyring.h 00:02:10.012 TEST_HEADER include/spdk/jsonrpc.h 00:02:10.012 TEST_HEADER include/spdk/keyring_module.h 00:02:10.012 TEST_HEADER include/spdk/likely.h 00:02:10.012 TEST_HEADER include/spdk/log.h 00:02:10.012 TEST_HEADER include/spdk/lvol.h 00:02:10.012 TEST_HEADER include/spdk/memory.h 00:02:10.012 TEST_HEADER include/spdk/mmio.h 00:02:10.012 TEST_HEADER 
include/spdk/nbd.h 00:02:10.012 TEST_HEADER include/spdk/notify.h 00:02:10.012 TEST_HEADER include/spdk/net.h 00:02:10.012 TEST_HEADER include/spdk/nvme.h 00:02:10.013 TEST_HEADER include/spdk/nvme_intel.h 00:02:10.013 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:10.013 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:10.013 TEST_HEADER include/spdk/nvme_spec.h 00:02:10.013 TEST_HEADER include/spdk/nvme_zns.h 00:02:10.013 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:10.013 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:10.013 TEST_HEADER include/spdk/nvmf.h 00:02:10.013 TEST_HEADER include/spdk/nvmf_spec.h 00:02:10.013 TEST_HEADER include/spdk/nvmf_transport.h 00:02:10.013 TEST_HEADER include/spdk/opal.h 00:02:10.013 TEST_HEADER include/spdk/opal_spec.h 00:02:10.013 TEST_HEADER include/spdk/pci_ids.h 00:02:10.304 TEST_HEADER include/spdk/pipe.h 00:02:10.304 TEST_HEADER include/spdk/queue.h 00:02:10.304 TEST_HEADER include/spdk/reduce.h 00:02:10.304 TEST_HEADER include/spdk/rpc.h 00:02:10.304 TEST_HEADER include/spdk/scheduler.h 00:02:10.304 TEST_HEADER include/spdk/scsi.h 00:02:10.304 TEST_HEADER include/spdk/scsi_spec.h 00:02:10.304 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:10.304 TEST_HEADER include/spdk/sock.h 00:02:10.304 TEST_HEADER include/spdk/stdinc.h 00:02:10.304 TEST_HEADER include/spdk/string.h 00:02:10.304 TEST_HEADER include/spdk/thread.h 00:02:10.304 TEST_HEADER include/spdk/trace.h 00:02:10.304 TEST_HEADER include/spdk/trace_parser.h 00:02:10.304 TEST_HEADER include/spdk/tree.h 00:02:10.304 TEST_HEADER include/spdk/ublk.h 00:02:10.304 TEST_HEADER include/spdk/util.h 00:02:10.304 TEST_HEADER include/spdk/uuid.h 00:02:10.304 CC app/spdk_dd/spdk_dd.o 00:02:10.304 TEST_HEADER include/spdk/version.h 00:02:10.304 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:10.304 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:10.304 TEST_HEADER include/spdk/vhost.h 00:02:10.304 TEST_HEADER include/spdk/vmd.h 00:02:10.304 TEST_HEADER include/spdk/xor.h 00:02:10.304 TEST_HEADER include/spdk/zipf.h 00:02:10.304 CXX test/cpp_headers/accel.o 00:02:10.304 CXX test/cpp_headers/accel_module.o 00:02:10.304 CXX test/cpp_headers/assert.o 00:02:10.304 CXX test/cpp_headers/barrier.o 00:02:10.304 CXX test/cpp_headers/base64.o 00:02:10.304 CXX test/cpp_headers/bdev.o 00:02:10.304 CXX test/cpp_headers/bdev_module.o 00:02:10.304 CXX test/cpp_headers/bdev_zone.o 00:02:10.304 CXX test/cpp_headers/bit_array.o 00:02:10.304 CXX test/cpp_headers/bit_pool.o 00:02:10.304 CXX test/cpp_headers/blob_bdev.o 00:02:10.304 CXX test/cpp_headers/blobfs_bdev.o 00:02:10.304 CXX test/cpp_headers/blobfs.o 00:02:10.304 CXX test/cpp_headers/blob.o 00:02:10.304 CXX test/cpp_headers/conf.o 00:02:10.304 CXX test/cpp_headers/config.o 00:02:10.304 CXX test/cpp_headers/cpuset.o 00:02:10.304 CXX test/cpp_headers/crc16.o 00:02:10.304 CC app/nvmf_tgt/nvmf_main.o 00:02:10.304 CC app/iscsi_tgt/iscsi_tgt.o 00:02:10.304 CXX test/cpp_headers/crc32.o 00:02:10.304 CC examples/util/zipf/zipf.o 00:02:10.304 CC test/env/vtophys/vtophys.o 00:02:10.304 CC examples/ioat/verify/verify.o 00:02:10.304 CC test/app/stub/stub.o 00:02:10.305 CC test/app/histogram_perf/histogram_perf.o 00:02:10.305 CC test/thread/poller_perf/poller_perf.o 00:02:10.305 CC test/app/jsoncat/jsoncat.o 00:02:10.305 CC app/fio/nvme/fio_plugin.o 00:02:10.305 CC examples/ioat/perf/perf.o 00:02:10.305 CC app/spdk_tgt/spdk_tgt.o 00:02:10.305 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:10.305 CC test/env/memory/memory_ut.o 00:02:10.305 CC test/env/pci/pci_ut.o 00:02:10.305 
CC test/dma/test_dma/test_dma.o 00:02:10.305 CC test/app/bdev_svc/bdev_svc.o 00:02:10.305 CC app/fio/bdev/fio_plugin.o 00:02:10.305 CC test/env/mem_callbacks/mem_callbacks.o 00:02:10.305 LINK spdk_lspci 00:02:10.574 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:10.574 LINK rpc_client_test 00:02:10.574 LINK spdk_nvme_discover 00:02:10.574 LINK histogram_perf 00:02:10.574 LINK jsoncat 00:02:10.574 CXX test/cpp_headers/crc64.o 00:02:10.574 LINK interrupt_tgt 00:02:10.574 LINK vtophys 00:02:10.574 LINK poller_perf 00:02:10.574 LINK nvmf_tgt 00:02:10.574 LINK zipf 00:02:10.574 CXX test/cpp_headers/dif.o 00:02:10.574 CXX test/cpp_headers/dma.o 00:02:10.574 CXX test/cpp_headers/endian.o 00:02:10.574 CXX test/cpp_headers/env_dpdk.o 00:02:10.574 LINK env_dpdk_post_init 00:02:10.574 CXX test/cpp_headers/env.o 00:02:10.574 CXX test/cpp_headers/event.o 00:02:10.574 CXX test/cpp_headers/fd_group.o 00:02:10.574 CXX test/cpp_headers/fd.o 00:02:10.574 LINK spdk_trace_record 00:02:10.574 CXX test/cpp_headers/file.o 00:02:10.574 CXX test/cpp_headers/ftl.o 00:02:10.574 LINK stub 00:02:10.574 CXX test/cpp_headers/gpt_spec.o 00:02:10.574 CXX test/cpp_headers/hexlify.o 00:02:10.574 CXX test/cpp_headers/histogram_data.o 00:02:10.574 LINK bdev_svc 00:02:10.574 LINK iscsi_tgt 00:02:10.574 CXX test/cpp_headers/idxd.o 00:02:10.834 LINK ioat_perf 00:02:10.834 LINK spdk_tgt 00:02:10.834 LINK verify 00:02:10.834 CXX test/cpp_headers/idxd_spec.o 00:02:10.834 CXX test/cpp_headers/init.o 00:02:10.834 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:10.834 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:10.834 CXX test/cpp_headers/ioat.o 00:02:10.834 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:10.834 CXX test/cpp_headers/ioat_spec.o 00:02:10.834 CXX test/cpp_headers/iscsi_spec.o 00:02:10.834 LINK spdk_dd 00:02:10.834 CXX test/cpp_headers/json.o 00:02:10.834 CXX test/cpp_headers/jsonrpc.o 00:02:10.834 CXX test/cpp_headers/keyring.o 00:02:11.104 CXX test/cpp_headers/keyring_module.o 00:02:11.104 CXX test/cpp_headers/likely.o 00:02:11.104 LINK spdk_trace 00:02:11.104 LINK pci_ut 00:02:11.104 CXX test/cpp_headers/log.o 00:02:11.104 CXX test/cpp_headers/lvol.o 00:02:11.104 CXX test/cpp_headers/memory.o 00:02:11.104 CXX test/cpp_headers/mmio.o 00:02:11.104 CXX test/cpp_headers/nbd.o 00:02:11.104 CXX test/cpp_headers/net.o 00:02:11.104 CXX test/cpp_headers/notify.o 00:02:11.104 CXX test/cpp_headers/nvme.o 00:02:11.104 CXX test/cpp_headers/nvme_intel.o 00:02:11.104 CXX test/cpp_headers/nvme_ocssd.o 00:02:11.104 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:11.104 CXX test/cpp_headers/nvme_spec.o 00:02:11.104 CXX test/cpp_headers/nvme_zns.o 00:02:11.104 CXX test/cpp_headers/nvmf_cmd.o 00:02:11.104 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:11.104 LINK test_dma 00:02:11.104 CXX test/cpp_headers/nvmf.o 00:02:11.104 CXX test/cpp_headers/nvmf_spec.o 00:02:11.104 CXX test/cpp_headers/nvmf_transport.o 00:02:11.104 CXX test/cpp_headers/opal.o 00:02:11.104 CXX test/cpp_headers/opal_spec.o 00:02:11.369 CXX test/cpp_headers/pci_ids.o 00:02:11.369 LINK nvme_fuzz 00:02:11.369 CXX test/cpp_headers/pipe.o 00:02:11.369 CXX test/cpp_headers/queue.o 00:02:11.369 CC examples/sock/hello_world/hello_sock.o 00:02:11.369 CC test/event/event_perf/event_perf.o 00:02:11.369 CC examples/vmd/lsvmd/lsvmd.o 00:02:11.369 CC examples/vmd/led/led.o 00:02:11.369 LINK spdk_nvme 00:02:11.369 CXX test/cpp_headers/reduce.o 00:02:11.369 CC test/event/reactor/reactor.o 00:02:11.369 LINK spdk_bdev 00:02:11.369 CC examples/idxd/perf/perf.o 00:02:11.369 CXX 
test/cpp_headers/rpc.o 00:02:11.369 CC test/event/reactor_perf/reactor_perf.o 00:02:11.369 CC examples/thread/thread/thread_ex.o 00:02:11.369 CC test/event/app_repeat/app_repeat.o 00:02:11.369 CXX test/cpp_headers/scheduler.o 00:02:11.369 CXX test/cpp_headers/scsi.o 00:02:11.369 CXX test/cpp_headers/scsi_spec.o 00:02:11.369 CXX test/cpp_headers/sock.o 00:02:11.369 CXX test/cpp_headers/stdinc.o 00:02:11.369 CXX test/cpp_headers/string.o 00:02:11.369 CXX test/cpp_headers/thread.o 00:02:11.369 CXX test/cpp_headers/trace.o 00:02:11.369 CC test/event/scheduler/scheduler.o 00:02:11.369 CXX test/cpp_headers/trace_parser.o 00:02:11.369 CXX test/cpp_headers/tree.o 00:02:11.369 CXX test/cpp_headers/ublk.o 00:02:11.636 CXX test/cpp_headers/util.o 00:02:11.636 CXX test/cpp_headers/uuid.o 00:02:11.636 CXX test/cpp_headers/version.o 00:02:11.636 CXX test/cpp_headers/vfio_user_pci.o 00:02:11.636 CXX test/cpp_headers/vfio_user_spec.o 00:02:11.636 CXX test/cpp_headers/vhost.o 00:02:11.636 CXX test/cpp_headers/vmd.o 00:02:11.636 CXX test/cpp_headers/xor.o 00:02:11.636 CXX test/cpp_headers/zipf.o 00:02:11.636 LINK spdk_nvme_perf 00:02:11.636 LINK lsvmd 00:02:11.636 LINK mem_callbacks 00:02:11.636 CC app/vhost/vhost.o 00:02:11.636 LINK led 00:02:11.636 LINK reactor 00:02:11.636 LINK event_perf 00:02:11.636 LINK reactor_perf 00:02:11.636 LINK spdk_nvme_identify 00:02:11.636 LINK app_repeat 00:02:11.636 LINK vhost_fuzz 00:02:11.898 LINK spdk_top 00:02:11.898 LINK hello_sock 00:02:11.898 CC test/nvme/aer/aer.o 00:02:11.898 CC test/nvme/sgl/sgl.o 00:02:11.898 CC test/nvme/e2edp/nvme_dp.o 00:02:11.898 CC test/nvme/reserve/reserve.o 00:02:11.898 CC test/nvme/overhead/overhead.o 00:02:11.898 CC test/nvme/startup/startup.o 00:02:11.898 CC test/nvme/err_injection/err_injection.o 00:02:11.898 CC test/nvme/simple_copy/simple_copy.o 00:02:11.898 CC test/accel/dif/dif.o 00:02:11.898 CC test/nvme/reset/reset.o 00:02:11.898 LINK thread 00:02:11.898 CC test/blobfs/mkfs/mkfs.o 00:02:11.898 CC test/nvme/boot_partition/boot_partition.o 00:02:11.898 CC test/nvme/connect_stress/connect_stress.o 00:02:11.898 LINK scheduler 00:02:11.898 CC test/nvme/fused_ordering/fused_ordering.o 00:02:11.898 CC test/nvme/compliance/nvme_compliance.o 00:02:11.898 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:11.898 CC test/nvme/fdp/fdp.o 00:02:11.898 CC test/nvme/cuse/cuse.o 00:02:11.898 CC test/lvol/esnap/esnap.o 00:02:11.898 LINK idxd_perf 00:02:11.898 LINK vhost 00:02:12.156 LINK startup 00:02:12.156 LINK err_injection 00:02:12.156 LINK reserve 00:02:12.156 LINK connect_stress 00:02:12.156 LINK doorbell_aers 00:02:12.156 LINK boot_partition 00:02:12.156 LINK sgl 00:02:12.156 LINK aer 00:02:12.156 LINK mkfs 00:02:12.156 LINK fused_ordering 00:02:12.156 LINK reset 00:02:12.156 CC examples/nvme/hello_world/hello_world.o 00:02:12.156 CC examples/nvme/hotplug/hotplug.o 00:02:12.156 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:12.156 CC examples/nvme/arbitration/arbitration.o 00:02:12.156 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:12.156 CC examples/nvme/reconnect/reconnect.o 00:02:12.156 LINK simple_copy 00:02:12.156 CC examples/nvme/abort/abort.o 00:02:12.156 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:12.415 LINK memory_ut 00:02:12.415 LINK nvme_compliance 00:02:12.415 LINK nvme_dp 00:02:12.415 LINK fdp 00:02:12.415 LINK dif 00:02:12.415 CC examples/accel/perf/accel_perf.o 00:02:12.415 LINK overhead 00:02:12.415 CC examples/blob/cli/blobcli.o 00:02:12.415 CC examples/blob/hello_world/hello_blob.o 00:02:12.415 LINK cmb_copy 
00:02:12.415 LINK pmr_persistence 00:02:12.673 LINK hello_world 00:02:12.673 LINK hotplug 00:02:12.673 LINK abort 00:02:12.673 LINK arbitration 00:02:12.673 LINK hello_blob 00:02:12.673 LINK reconnect 00:02:12.930 CC test/bdev/bdevio/bdevio.o 00:02:12.930 LINK nvme_manage 00:02:12.930 LINK accel_perf 00:02:12.930 LINK blobcli 00:02:13.187 LINK iscsi_fuzz 00:02:13.187 LINK bdevio 00:02:13.187 CC examples/bdev/hello_world/hello_bdev.o 00:02:13.187 CC examples/bdev/bdevperf/bdevperf.o 00:02:13.446 LINK hello_bdev 00:02:13.446 LINK cuse 00:02:14.013 LINK bdevperf 00:02:14.579 CC examples/nvmf/nvmf/nvmf.o 00:02:14.839 LINK nvmf 00:02:17.374 LINK esnap 00:02:17.374 00:02:17.374 real 0m49.083s 00:02:17.374 user 10m7.155s 00:02:17.374 sys 2m26.949s 00:02:17.374 14:03:46 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:17.374 14:03:46 make -- common/autotest_common.sh@10 -- $ set +x 00:02:17.374 ************************************ 00:02:17.374 END TEST make 00:02:17.374 ************************************ 00:02:17.374 14:03:46 -- common/autotest_common.sh@1142 -- $ return 0 00:02:17.374 14:03:46 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:17.374 14:03:46 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:17.374 14:03:46 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:17.374 14:03:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.374 14:03:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:17.374 14:03:46 -- pm/common@44 -- $ pid=705832 00:02:17.374 14:03:46 -- pm/common@50 -- $ kill -TERM 705832 00:02:17.374 14:03:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.374 14:03:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:17.374 14:03:46 -- pm/common@44 -- $ pid=705834 00:02:17.374 14:03:46 -- pm/common@50 -- $ kill -TERM 705834 00:02:17.374 14:03:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.374 14:03:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:17.374 14:03:46 -- pm/common@44 -- $ pid=705836 00:02:17.374 14:03:46 -- pm/common@50 -- $ kill -TERM 705836 00:02:17.374 14:03:46 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.374 14:03:46 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:17.374 14:03:46 -- pm/common@44 -- $ pid=705863 00:02:17.374 14:03:46 -- pm/common@50 -- $ sudo -E kill -TERM 705863 00:02:17.374 14:03:46 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:17.374 14:03:46 -- nvmf/common.sh@7 -- # uname -s 00:02:17.374 14:03:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:17.374 14:03:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:17.374 14:03:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:17.374 14:03:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:17.374 14:03:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:17.374 14:03:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:17.374 14:03:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:17.374 14:03:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:17.374 14:03:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:17.374 14:03:46 -- nvmf/common.sh@17 -- # 
nvme gen-hostnqn 00:02:17.374 14:03:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:17.374 14:03:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:17.374 14:03:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:17.374 14:03:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:17.374 14:03:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:17.374 14:03:46 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:17.374 14:03:46 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:17.374 14:03:46 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:17.374 14:03:46 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:17.374 14:03:46 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:17.374 14:03:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.374 14:03:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.374 14:03:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.374 14:03:46 -- paths/export.sh@5 -- # export PATH 00:02:17.374 14:03:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:17.374 14:03:46 -- nvmf/common.sh@47 -- # : 0 00:02:17.374 14:03:46 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:17.374 14:03:46 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:17.374 14:03:46 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:17.374 14:03:46 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:17.374 14:03:46 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:17.374 14:03:46 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:17.374 14:03:46 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:17.374 14:03:46 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:17.374 14:03:46 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:17.374 14:03:46 -- spdk/autotest.sh@32 -- # uname -s 00:02:17.374 14:03:46 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:17.374 14:03:46 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:17.374 14:03:46 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:17.374 14:03:46 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:17.374 14:03:46 -- spdk/autotest.sh@40 -- # 
echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:17.374 14:03:46 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:17.374 14:03:46 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:17.374 14:03:46 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:17.374 14:03:46 -- spdk/autotest.sh@48 -- # udevadm_pid=761960 00:02:17.374 14:03:46 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:17.374 14:03:46 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:17.374 14:03:46 -- pm/common@17 -- # local monitor 00:02:17.374 14:03:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.374 14:03:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.374 14:03:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.374 14:03:46 -- pm/common@21 -- # date +%s 00:02:17.374 14:03:46 -- pm/common@21 -- # date +%s 00:02:17.374 14:03:46 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:17.375 14:03:46 -- pm/common@25 -- # sleep 1 00:02:17.375 14:03:46 -- pm/common@21 -- # date +%s 00:02:17.375 14:03:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721909026 00:02:17.375 14:03:46 -- pm/common@21 -- # date +%s 00:02:17.375 14:03:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721909026 00:02:17.375 14:03:46 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721909026 00:02:17.375 14:03:46 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721909026 00:02:17.375 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721909026_collect-vmstat.pm.log 00:02:17.375 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721909026_collect-cpu-load.pm.log 00:02:17.375 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721909026_collect-cpu-temp.pm.log 00:02:17.375 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721909026_collect-bmc-pm.bmc.pm.log 00:02:18.752 14:03:47 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:18.752 14:03:47 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:18.752 14:03:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:18.752 14:03:47 -- common/autotest_common.sh@10 -- # set +x 00:02:18.752 14:03:47 -- spdk/autotest.sh@59 -- # create_test_list 00:02:18.752 14:03:47 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:18.752 14:03:47 -- common/autotest_common.sh@10 -- # set +x 00:02:18.752 14:03:48 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:18.753 14:03:48 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.753 14:03:48 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
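[Editorial sketch] The trace above shows how the autotest harness launches its resource monitors: each collect-* script is started in the background with an output directory, a shared `date +%s` timestamp baked into the prefix, and its output redirected to a per-monitor .pm.log file; earlier in the log, stop_monitor_resources tears them down by reading the matching .pid files and sending TERM. The snippet below is a minimal, self-contained reconstruction of that pattern only — the collector names and the pid/log locations are taken from the log, but the wrapper functions themselves are illustrative and are not the actual scripts/perf/pm implementation.

#!/usr/bin/env bash
# Sketch of the background-monitor pattern visible in the trace (assumed, not SPDK's code).
set -euo pipefail

output_dir=${1:-/tmp/power}        # the log uses .../spdk/../output/power
stamp=$(date +%s)                  # shared timestamp, cf. monitor.autotest.sh.1721909026
mkdir -p "$output_dir"

start_monitor() {
    local name=$1
    # Each collector writes to <prefix>_<name>.pm.log and records its pid in <name>.pid,
    # which the stop path later signals with TERM.
    ( exec >"$output_dir/monitor.autotest.sh.${stamp}_${name}.pm.log" 2>&1
      echo "placeholder samples for $name"
      sleep 300 ) &
    echo $! > "$output_dir/${name}.pid"
}

stop_monitors() {
    local pidfile
    for pidfile in "$output_dir"/collect-*.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
    done
}

for m in collect-cpu-load collect-vmstat collect-cpu-temp; do
    start_monitor "$m"
done
trap stop_monitors EXIT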
00:02:18.753 14:03:48 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:18.753 14:03:48 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:18.753 14:03:48 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:18.753 14:03:48 -- common/autotest_common.sh@1455 -- # uname 00:02:18.753 14:03:48 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:18.753 14:03:48 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:18.753 14:03:48 -- common/autotest_common.sh@1475 -- # uname 00:02:18.753 14:03:48 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:18.753 14:03:48 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:18.753 14:03:48 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:18.753 14:03:48 -- spdk/autotest.sh@72 -- # hash lcov 00:02:18.753 14:03:48 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:18.753 14:03:48 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:18.753 --rc lcov_branch_coverage=1 00:02:18.753 --rc lcov_function_coverage=1 00:02:18.753 --rc genhtml_branch_coverage=1 00:02:18.753 --rc genhtml_function_coverage=1 00:02:18.753 --rc genhtml_legend=1 00:02:18.753 --rc geninfo_all_blocks=1 00:02:18.753 ' 00:02:18.753 14:03:48 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:18.753 --rc lcov_branch_coverage=1 00:02:18.753 --rc lcov_function_coverage=1 00:02:18.753 --rc genhtml_branch_coverage=1 00:02:18.753 --rc genhtml_function_coverage=1 00:02:18.753 --rc genhtml_legend=1 00:02:18.753 --rc geninfo_all_blocks=1 00:02:18.753 ' 00:02:18.753 14:03:48 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:18.753 --rc lcov_branch_coverage=1 00:02:18.753 --rc lcov_function_coverage=1 00:02:18.753 --rc genhtml_branch_coverage=1 00:02:18.753 --rc genhtml_function_coverage=1 00:02:18.753 --rc genhtml_legend=1 00:02:18.753 --rc geninfo_all_blocks=1 00:02:18.753 --no-external' 00:02:18.753 14:03:48 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:18.753 --rc lcov_branch_coverage=1 00:02:18.753 --rc lcov_function_coverage=1 00:02:18.753 --rc genhtml_branch_coverage=1 00:02:18.753 --rc genhtml_function_coverage=1 00:02:18.753 --rc genhtml_legend=1 00:02:18.753 --rc geninfo_all_blocks=1 00:02:18.753 --no-external' 00:02:18.753 14:03:48 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:18.753 lcov: LCOV version 1.14 00:02:18.753 14:03:48 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:20.193 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:20.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 
00:02:20.193 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:20.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:20.454 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:20.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:20.454 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:20.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:20.454 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:20.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:20.454 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:20.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:20.454 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:20.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:20.454 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:20.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:20.454 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:20.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:20.454 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:20.455 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:20.455 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:20.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:20.455 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:35.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:35.356 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:53.495 14:04:21 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:53.495 14:04:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:53.495 14:04:21 -- common/autotest_common.sh@10 -- # set +x 00:02:53.495 14:04:21 -- spdk/autotest.sh@91 -- # rm -f 00:02:53.495 14:04:21 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.495 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:53.495 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:53.495 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:53.495 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:53.495 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:53.495 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:53.495 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:53.495 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:53.495 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:53.495 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:53.495 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:53.495 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:53.495 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:53.495 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:53.495 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:53.495 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:53.495 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:53.495 14:04:22 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:53.495 14:04:22 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:53.495 14:04:22 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:53.495 14:04:22 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:53.495 14:04:22 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:53.495 14:04:22 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:53.495 14:04:22 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:53.495 
14:04:22 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:53.495 14:04:22 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:53.495 14:04:22 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:53.495 14:04:22 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:53.495 14:04:22 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:53.495 14:04:22 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:53.495 14:04:22 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:53.495 14:04:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:53.495 No valid GPT data, bailing 00:02:53.495 14:04:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:53.495 14:04:23 -- scripts/common.sh@391 -- # pt= 00:02:53.495 14:04:23 -- scripts/common.sh@392 -- # return 1 00:02:53.495 14:04:23 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:53.495 1+0 records in 00:02:53.495 1+0 records out 00:02:53.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00201586 s, 520 MB/s 00:02:53.495 14:04:23 -- spdk/autotest.sh@118 -- # sync 00:02:53.495 14:04:23 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:53.495 14:04:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:53.495 14:04:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:55.398 14:04:24 -- spdk/autotest.sh@124 -- # uname -s 00:02:55.399 14:04:24 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:55.399 14:04:24 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:55.399 14:04:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.399 14:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.399 14:04:24 -- common/autotest_common.sh@10 -- # set +x 00:02:55.399 ************************************ 00:02:55.399 START TEST setup.sh 00:02:55.399 ************************************ 00:02:55.399 14:04:24 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:55.399 * Looking for test storage... 00:02:55.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:55.399 14:04:24 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:55.399 14:04:24 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:55.399 14:04:24 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:55.399 14:04:24 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:55.399 14:04:24 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:55.399 14:04:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:55.399 ************************************ 00:02:55.399 START TEST acl 00:02:55.399 ************************************ 00:02:55.399 14:04:24 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:55.399 * Looking for test storage... 
00:02:55.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:55.399 14:04:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:55.399 14:04:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:55.399 14:04:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:55.399 14:04:25 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:55.657 14:04:25 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:55.657 14:04:25 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:55.657 14:04:25 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:55.657 14:04:25 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:55.657 14:04:25 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:55.657 14:04:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:55.657 14:04:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:55.657 14:04:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:55.657 14:04:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:55.657 14:04:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:55.657 14:04:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.657 14:04:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.029 14:04:26 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:57.029 14:04:26 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:57.029 14:04:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:57.029 14:04:26 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:57.029 14:04:26 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.029 14:04:26 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:58.411 Hugepages 00:02:58.411 node hugesize free / total 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 00:02:58.411 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.411 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:58.412 14:04:27 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:58.412 14:04:27 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:58.412 14:04:27 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:58.412 14:04:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:58.412 ************************************ 00:02:58.412 START TEST denied 00:02:58.412 ************************************ 00:02:58.412 14:04:27 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:58.412 14:04:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:58.412 14:04:27 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:58.412 14:04:27 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:58.412 14:04:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.412 14:04:27 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.787 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:59.787 14:04:29 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.787 14:04:29 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.320 00:03:02.320 real 0m3.973s 00:03:02.320 user 0m1.190s 00:03:02.320 sys 0m1.831s 00:03:02.320 14:04:31 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:02.320 14:04:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:02.320 ************************************ 00:03:02.320 END TEST denied 00:03:02.320 ************************************ 00:03:02.320 14:04:31 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:02.320 14:04:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:02.320 14:04:31 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.320 14:04:31 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.320 14:04:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:02.320 ************************************ 00:03:02.320 START TEST allowed 00:03:02.320 ************************************ 00:03:02.320 14:04:31 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:02.320 14:04:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:02.320 14:04:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:02.320 14:04:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.320 14:04:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:02.320 14:04:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:04.857 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:04.857 14:04:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:04.857 14:04:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:04.857 14:04:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:04.857 14:04:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:04.857 14:04:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.233 00:03:06.233 real 0m4.027s 00:03:06.233 user 0m1.082s 00:03:06.233 sys 0m1.781s 00:03:06.233 14:04:35 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.233 14:04:35 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:06.233 ************************************ 00:03:06.233 END TEST allowed 00:03:06.233 ************************************ 00:03:06.233 14:04:35 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:06.233 00:03:06.233 real 0m10.879s 00:03:06.233 user 0m3.414s 00:03:06.233 sys 0m5.423s 00:03:06.233 14:04:35 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.233 14:04:35 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:06.233 ************************************ 00:03:06.233 END TEST acl 00:03:06.233 ************************************ 00:03:06.491 14:04:35 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:06.491 14:04:35 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.491 14:04:35 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.491 14:04:35 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.491 14:04:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:06.491 ************************************ 00:03:06.491 START TEST hugepages 00:03:06.491 ************************************ 00:03:06.491 14:04:35 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:06.492 * Looking for test storage... 00:03:06.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.492 14:04:35 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44717500 kB' 'MemAvailable: 48184400 kB' 'Buffers: 2704 kB' 'Cached: 9386116 kB' 'SwapCached: 0 kB' 'Active: 6357032 kB' 'Inactive: 3490896 kB' 'Active(anon): 5970720 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 462192 kB' 'Mapped: 199748 kB' 'Shmem: 5511612 kB' 'KReclaimable: 167064 kB' 'Slab: 495436 kB' 'SReclaimable: 167064 kB' 'SUnreclaim: 328372 kB' 'KernelStack: 12768 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 7054424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:06.492 14:04:35 setup.sh.hugepages -- [xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo line with IFS=': ' and hit `continue` on every key from MemTotal through HugePages_Free that did not match Hugepagesize] 00:03:06.493 14:04:35 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:06.493 14:04:35 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.493 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.494 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.494 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:06.494 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.494 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.494 
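The scan above is setup/common.sh's get_meminfo walking /proc/meminfo with IFS=': ' until it reaches Hugepagesize, echoing 2048 and returning 0; hugepages.sh then records that as default_hugepages. A minimal sketch of that parsing pattern, assuming a standalone helper rather than the repository's exact function (the real get_meminfo also handles per-node meminfo files and strips their "Node N" prefix):

    #!/usr/bin/env bash
    # Print the value of a single /proc/meminfo key, mirroring the
    # IFS=': ' / read -r var val _ loop visible in the trace above.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value Hugepagesize   # prints 2048 on this CI host
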
14:04:36 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:06.494 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:06.494 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:06.494 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:06.494 14:04:36 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:06.494 14:04:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.494 14:04:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.494 14:04:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.494 ************************************ 00:03:06.494 START TEST default_setup 00:03:06.494 ************************************ 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.494 14:04:36 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:07.867 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:07.867 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:07.867 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:07.867 
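By this point clear_hp has written 0 to every node's hugepages-*/nr_hugepages file (with CLEAR_HUGE=yes exported) and get_test_nr_hugepages has sized the pool for default_setup: 2097152 kB requested at 2048 kB per page works out to nr_hugepages=1024, all of it assigned to node 0 (nodes_test[0]=1024). A rough sketch of that bookkeeping, using illustrative variable names rather than the script's own helpers:

    #!/usr/bin/env bash
    # Pool sizing as shown in the trace: 2097152 kB / 2048 kB pages = 1024 pages.
    default_hugepages=2048                            # kB, Hugepagesize read earlier
    size_kb=2097152                                   # requested test size in kB
    nr_hugepages=$(( size_kb / default_hugepages ))   # 1024

    # clear_hp equivalent: drop any leftover pages on every NUMA node (root required).
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done

    # default_setup pins the whole pool on node 0.
    nodes_test=()
    nodes_test[0]=$nr_hugepages
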
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:07.867 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:07.867 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:07.867 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:07.867 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:07.867 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:07.867 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:07.867 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:07.867 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:07.867 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:07.867 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:07.867 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:07.867 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:08.806 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:08.806 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:08.806 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:08.806 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.806 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:08.806 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:08.806 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46830224 kB' 'MemAvailable: 50297140 kB' 'Buffers: 2704 kB' 'Cached: 9386208 kB' 'SwapCached: 0 kB' 'Active: 6375520 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989208 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480768 kB' 'Mapped: 199876 kB' 'Shmem: 5511704 kB' 'KReclaimable: 167096 kB' 'Slab: 495316 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328220 kB' 
'KernelStack: 12736 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7075588 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:09.070 14:04:38 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 read each /proc/meminfo line and hit `continue` on every key from MemTotal through WritebackTmp that did not match AnonHugePages] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.071 14:04:38 
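verify_nr_hugepages has now collected AnonHugePages (0 kB, so anon=0) and is fetching HugePages_Surp the same way; the AnonHugePages read only happens because /sys/kernel/mm/transparent_hugepage/enabled reported "always [madvise] never" rather than "[never]" (the @96 test earlier in the trace). A compact, hedged illustration of those two reads without the full key-by-key scan:

    #!/usr/bin/env bash
    anon=0
    # Only count transparent huge pages if THP is not disabled outright.
    if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 kB on this host
    fi
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)      # 0 on this host
    echo "anon=$anon surp=$surp"
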
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46831640 kB' 'MemAvailable: 50298556 kB' 'Buffers: 2704 kB' 'Cached: 9386208 kB' 'SwapCached: 0 kB' 'Active: 6375480 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989168 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480764 kB' 'Mapped: 199820 kB' 'Shmem: 5511704 kB' 'KReclaimable: 167096 kB' 'Slab: 495316 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328220 kB' 'KernelStack: 12768 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7075608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.071 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.072 14:04:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ [xtrace condensed: setup/common.sh@31-32 `continue`d past every /proc/meminfo key from Buffers through CmaTotal while scanning for HugePages_Surp] 00:03:09.073 14:04:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46828048 kB' 'MemAvailable: 50294964 kB' 'Buffers: 2704 kB' 'Cached: 9386228 kB' 'SwapCached: 0 kB' 'Active: 6376376 kB' 'Inactive: 3490896 kB' 'Active(anon): 5990064 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481592 kB' 'Mapped: 200180 kB' 'Shmem: 5511724 kB' 'KReclaimable: 167096 kB' 'Slab: 495368 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328272 kB' 'KernelStack: 12720 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7078304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.073 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.074 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 
14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:09.075 nr_hugepages=1024 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:09.075 resv_hugepages=0 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:09.075 surplus_hugepages=0 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:09.075 anon_hugepages=0 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:09.075 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46823768 
kB' 'MemAvailable: 50290684 kB' 'Buffers: 2704 kB' 'Cached: 9386252 kB' 'SwapCached: 0 kB' 'Active: 6380428 kB' 'Inactive: 3490896 kB' 'Active(anon): 5994116 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485620 kB' 'Mapped: 200556 kB' 'Shmem: 5511748 kB' 'KReclaimable: 167096 kB' 'Slab: 495368 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328272 kB' 'KernelStack: 12736 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7081772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195972 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
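
This pass over the same /proc/meminfo field list is get_meminfo being invoked a third time, now for HugePages_Total; the two passes before it already produced surp=0 (hugepages.sh@99) and resv=0 (@100), and the summary lines nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0 were echoed in between. The accounting the trace checks at hugepages.sh@107-110 amounts to the identity 1024 == 1024 + 0 + 0. A hedged sketch of that caller-side bookkeeping, reusing get_meminfo_sketch from above (the function name and structure here are illustrative; only the variable names follow the log):

    verify_default_setup_sketch() {
        local nr_hugepages=1024                       # pages requested by the test
        local surp resv total
        surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in the run above
        resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in the run above
        total=$(get_meminfo_sketch HugePages_Total)   # 1024, matched a few entries below
        # mirrors the log's "(( 1024 == nr_hugepages + surp + resv ))" checks
        (( total == nr_hugepages + surp + resv )) || return 1
        (( total == nr_hugepages )) || return 1
    }
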
00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.076 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
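
The [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] entry just above is the first test in this pass that matches, so the entries that follow show common.sh@33 echoing 1024 and returning, hugepages.sh@110 re-checking (( 1024 == nr_hugepages + surp + resv )), and get_nodes (hugepages.sh@27-33) enumerating /sys/devices/system/node/node+([0-9]) (two nodes here, with all 1024 pages on node 0). The per-node pass then re-runs get_meminfo with node=0, which switches mem_f to /sys/devices/system/node/node0/meminfo; those lines carry a "Node 0 " prefix, and the extglob expansion in the trace strips it so the same field-matching loop can be reused. A sketch of just that branch, under the assumption that extglob is enabled (names follow the trace, not necessarily the current SPDK source):

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
    printf '%s\n' "${mem[@]}"          # stripped lines feed the same var/val scan as before
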
00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.077 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21530000 kB' 'MemUsed: 11346940 kB' 'SwapCached: 0 kB' 'Active: 4965776 kB' 'Inactive: 3355476 kB' 'Active(anon): 4698644 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3355476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8171564 kB' 'Mapped: 89304 kB' 'AnonPages: 152876 kB' 'Shmem: 4548956 kB' 'KernelStack: 6712 kB' 'PageTables: 3136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82268 kB' 'Slab: 271388 kB' 'SReclaimable: 82268 kB' 'SUnreclaim: 189120 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.078 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ [... the same setup/common.sh@32 compare, @32 continue, @31 IFS=': ', @31 read -r var val _ cycle repeats for every remaining node meminfo key from MemFree through Unaccepted ...] 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:09.079 node0=1024 expecting 1024 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:09.079 00:03:09.079 real 0m2.541s 00:03:09.079 user 0m0.669s 00:03:09.079 sys 0m0.977s 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.079 14:04:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:09.079 ************************************ 00:03:09.079 END TEST default_setup 00:03:09.079 ************************************ 00:03:09.079 14:04:38 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:09.079 14:04:38 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:09.079 14:04:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.079 14:04:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.079 14:04:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.079 ************************************ 00:03:09.079 START TEST per_node_1G_alloc 00:03:09.079 ************************************ 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:09.079 14:04:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.079 14:04:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:10.457 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:10.457 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:10.457 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:10.457 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:10.457 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:10.457 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:10.457 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:10.457 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:10.457 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:10.458 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:10.458 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:10.458 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:10.458 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:10.458 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:10.458 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:10.458 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:10.458 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46820348 kB' 'MemAvailable: 50287264 kB' 'Buffers: 2704 kB' 'Cached: 9386328 kB' 'SwapCached: 0 kB' 'Active: 6375880 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989568 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481036 kB' 'Mapped: 199812 kB' 'Shmem: 5511824 kB' 'KReclaimable: 167096 kB' 'Slab: 495656 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328560 kB' 'KernelStack: 12752 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7076136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.458 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... the same setup/common.sh@32 compare, @32 continue, @31 IFS=': ', @31 read -r var val _ cycle repeats for every /proc/meminfo key from MemFree through HardwareCorrupted ...] 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
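The runs of setup/common.sh@31/@32 records above and below are the meminfo lookup helper in the SPDK test setup scripts walking every "key: value" line until it reaches the requested key and echoing that key's value (0 for AnonHugePages here). A minimal bash sketch of that lookup pattern, reconstructed from this xtrace rather than taken from the SPDK sources, so everything beyond the names visible in the trace is an assumption:

#!/usr/bin/env bash
# Sketch of the get_meminfo-style lookup traced above (a reconstruction, not the
# verbatim SPDK helper).
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# A node id switches the source to that NUMA node's meminfo file.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node <n> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")
	# Scan each "key: value" line; skip with continue until the requested key matches.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

get_meminfo HugePages_Surp   # prints 0 for the snapshot captured in this log

The hugepages subtests in this log use the same lookup for AnonHugePages, HugePages_Surp and HugePages_Rsvd system-wide, and for the per-node HugePages_Total/HugePages_Free counters when verifying how many pages each node actually received.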
00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46821724 kB' 'MemAvailable: 50288640 kB' 'Buffers: 2704 kB' 'Cached: 9386332 kB' 'SwapCached: 0 kB' 'Active: 6375744 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989432 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480908 kB' 'Mapped: 199756 kB' 'Shmem: 5511828 kB' 'KReclaimable: 167096 kB' 'Slab: 495600 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328504 kB' 'KernelStack: 12784 kB' 'PageTables: 7912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7076156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.459 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [... the same setup/common.sh@32 compare, @32 continue, @31 IFS=': ', @31 read -r var val _ cycle repeats for every /proc/meminfo key from MemFree through HugePages_Rsvd ...] 00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc --
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.461 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.462 14:04:39 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46822024 kB' 'MemAvailable: 50288940 kB' 'Buffers: 2704 kB' 'Cached: 9386332 kB' 'SwapCached: 0 kB' 'Active: 6375936 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989624 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481108 kB' 'Mapped: 199756 kB' 'Shmem: 5511828 kB' 'KReclaimable: 167096 kB' 'Slab: 495656 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328560 kB' 'KernelStack: 12816 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7076180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB'
[... the same IFS=': ' / read / compare / continue xtrace cycle repeats for each /proc/meminfo key, MemTotal through HugePages_Free ...]
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
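The hugepages.sh@107-109 arithmetic just traced verifies that the requested pool is fully accounted for before the test continues: the requested page count plus the surplus and reserved counters read from /proc/meminfo must add up to the kernel's total. A worked restatement of that check with the values recovered above (variable names chosen for illustration):

# counters read from /proc/meminfo in the trace above
nr_hugepages=1024   # requested 2048 kB pages
surp=0              # HugePages_Surp
resv=0              # HugePages_Rsvd
hp_total=1024       # HugePages_Total

if (( hp_total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: ${hp_total} == ${nr_hugepages} + ${surp} + ${resv}"
else
    echo "unexpected hugepage accounting" >&2
fi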
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.464 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46822384 kB' 'MemAvailable: 50289300 kB' 'Buffers: 2704 kB' 'Cached: 9386372 kB' 'SwapCached: 0 kB' 'Active: 6375744 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989432 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480836 kB' 'Mapped: 199756 kB' 'Shmem: 5511868 kB' 'KReclaimable: 167096 kB' 'Slab: 495656 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328560 kB' 'KernelStack: 12784 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7076200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB'
[... the same IFS=': ' / read / compare / continue xtrace cycle repeats for each /proc/meminfo key, MemTotal through Unaccepted ...]
00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
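With the global pool confirmed, the test moves to the per-node phase traced next: get_nodes enumerates /sys/devices/system/node/node* and this per_node_1G_alloc case assigns 512 pages to each of the machine's two NUMA nodes. A small sketch of that enumeration step; the extglob pattern and the 512-per-node split are taken from the trace, while the array and variable names are illustrative:

# Enumerate NUMA nodes as the traced for-loop does and split the 1024-page
# pool evenly across them (512 per node on this two-node machine).
shopt -s extglob nullglob
declare -a nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=512   # key is the numeric node id
done
echo "no_nodes=${#nodes_sys[@]}"          # 2 in the run traced above
echo "per-node pages: ${nodes_sys[*]}"    # 512 512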
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22603236 kB' 'MemUsed: 10273704 kB' 'SwapCached: 0 kB' 'Active: 4964040 kB' 'Inactive: 3355476 kB' 'Active(anon): 4696908 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3355476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8171604 kB' 'Mapped: 88540 kB' 'AnonPages: 151356 kB' 'Shmem: 4548996 kB' 'KernelStack: 6680 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82268 kB' 'Slab: 271452 kB' 'SReclaimable: 82268 kB' 'SUnreclaim: 189184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue 00:03:10.466 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: setup/common.sh@31-32 walks every field of the node0 meminfo dump with IFS=': ' read and hits 'continue' for each one that is not HugePages_Surp]
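The condensed trace above is the generic pattern setup/common.sh uses for get_meminfo: read the node's meminfo file, strip the "Node <n> " prefix, split each line with IFS=': ', and skip everything that is not the requested key. A minimal stand-alone sketch of that approach, for illustration only (get_node_meminfo is a made-up name, not the SPDK helper):

#!/usr/bin/env bash
# Illustration only: pull a single field (e.g. HugePages_Surp) out of a
# per-node meminfo file the way the traced loop does, skipping all other keys.
get_node_meminfo() {
    local key=$1 node=$2
    local file=/sys/devices/system/node/node${node}/meminfo
    [[ -e $file ]] || file=/proc/meminfo            # fall back to the global view
    local line var val _
    while read -r line; do
        line=${line#"Node ${node} "}                # per-node lines carry a "Node <n> " prefix
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$key" ]] || continue            # this is the long run of 'continue' in the trace
        echo "$val"
        return 0
    done < "$file"
    return 1
}
get_node_meminfo HugePages_Surp 0                   # prints e.g. 0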
00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.467 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 24219244 kB' 'MemUsed: 3445528 kB' 'SwapCached: 0 kB' 'Active: 1411820 kB' 'Inactive: 135420 kB' 'Active(anon): 1292640 kB' 'Inactive(anon): 0 kB' 'Active(file): 119180 kB' 'Inactive(file): 135420 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1217496 kB' 'Mapped: 111216 kB' 'AnonPages: 329976 kB' 'Shmem: 962896 kB' 'KernelStack: 6072 kB' 'PageTables: 4808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84828 kB' 'Slab: 224204 kB' 'SReclaimable: 84828 kB' 'SUnreclaim: 139376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:10.467 
14:04:40 setup.sh.hugepages.per_node_1G_alloc -- [xtrace condensed: the same per-field scan runs over the node1 meminfo dump, skipping every field that is not HugePages_Surp] 00:03:10.468 14:04:40 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:10.469 node0=512 expecting 512 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:10.469 node1=512 expecting 512 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:10.469 00:03:10.469 real 0m1.444s 00:03:10.469 user 0m0.591s 00:03:10.469 sys 0m0.815s 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:10.469 14:04:40 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:10.469 ************************************ 00:03:10.469 END TEST per_node_1G_alloc 00:03:10.469 ************************************ 00:03:10.469 14:04:40 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:10.469 14:04:40 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:10.469 14:04:40 setup.sh.hugepages -- 
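The "node0=512 expecting 512" / "node1=512 expecting 512" lines above are the pass condition for per_node_1G_alloc: the hugepages each NUMA node actually holds must match the per-node target the test computed (surplus pages, 0 here, are folded in as well). A hedged sketch of that final comparison; the array names and the awk lookup are illustrative, not the exact setup/hugepages.sh code:

#!/usr/bin/env bash
# Sketch of the per-node pass check: every node must hold the hugepage
# count the test expected for it. The 'expected' values mirror the trace above.
declare -A expected=( [0]=512 [1]=512 )
declare -A actual
for node in "${!expected[@]}"; do
    actual[$node]=$(awk '/HugePages_Total/ {print $NF}' \
        "/sys/devices/system/node/node${node}/meminfo")
    echo "node${node}=${actual[$node]} expecting ${expected[$node]}"
    [[ ${actual[$node]} -eq ${expected[$node]} ]] || exit 1   # fail the sub-test on mismatch
done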
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:10.469 14:04:40 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:10.469 14:04:40 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:10.728 ************************************ 00:03:10.728 START TEST even_2G_alloc 00:03:10.728 ************************************ 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:10.728 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.729 14:04:40 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:11.684 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:11.684 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 
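even_2G_alloc asks for 2097152 kB worth of hugepages (the "2G" in the test name); at the default 2048 kB hugepage size that is 1024 pages, and with HUGE_EVEN_ALLOC=yes the trace spreads them evenly, 512 per node, before re-running scripts/setup.sh. A minimal sketch of that arithmetic (variable names are illustrative, not the hugepages.sh internals):

#!/usr/bin/env bash
# Sketch: turn a requested size into a hugepage count and spread it
# evenly across the NUMA nodes, matching the 512/512 split traced above.
size_kb=2097152            # requested amount, in kB
hugepage_kb=2048           # default 2 MB hugepage size
no_nodes=2                 # NUMA nodes on this machine

nr_hugepages=$(( size_kb / hugepage_kb ))    # 1024
per_node=$(( nr_hugepages / no_nodes ))      # 512

declare -a nodes_test
for (( node = 0; node < no_nodes; node++ )); do
    nodes_test[node]=$per_node
done
printf 'node%d=%d\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"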
00:03:11.684 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:11.684 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:11.684 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:11.684 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:11.684 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:11.684 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:11.684 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:11.684 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:11.684 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:11.684 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:11.684 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:11.684 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:11.684 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:11.684 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:11.684 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.957 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46805844 kB' 'MemAvailable: 50272788 kB' 'Buffers: 2704 kB' 'Cached: 9386456 kB' 'SwapCached: 0 kB' 'Active: 6376256 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989944 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 
'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481288 kB' 'Mapped: 199852 kB' 'Shmem: 5511952 kB' 'KReclaimable: 167152 kB' 'Slab: 495796 kB' 'SReclaimable: 167152 kB' 'SUnreclaim: 328644 kB' 'KernelStack: 12800 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7076060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.958 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.958 
14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [xtrace condensed: the scan skips every remaining /proc/meminfo field until it reaches AnonHugePages] 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- 
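Before counting hugepages, verify_nr_hugepages checks whether transparent hugepages are globally disabled (the "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test earlier in the trace) and then reads AnonHugePages from /proc/meminfo; it is 0 kB here, so anon ends up 0. A hedged sketch of that step, for illustration rather than as the exact script:

#!/usr/bin/env bash
# Sketch: treat anonymous hugepages as 0 when THP is disabled, otherwise
# read the AnonHugePages figure (in kB) from /proc/meminfo.
anon=0
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)    # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
fi
echo "anon=${anon}"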
setup/hugepages.sh@97 -- # anon=0 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46805564 kB' 'MemAvailable: 50272508 kB' 'Buffers: 2704 kB' 'Cached: 9386460 kB' 'SwapCached: 0 kB' 'Active: 6376348 kB' 'Inactive: 3490896 kB' 'Active(anon): 5990036 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481336 kB' 'Mapped: 199844 kB' 'Shmem: 5511956 kB' 'KReclaimable: 167152 kB' 'Slab: 495804 kB' 'SReclaimable: 167152 kB' 'SUnreclaim: 328652 kB' 'KernelStack: 12768 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7076076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.959 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... repeated setup/common.sh@32 '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' checks, one per meminfo key from Buffers through FilePmdMapped, each followed by the same "# continue", "# IFS=': '" and "# read -r var val _" records as above ...]
00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46805548 kB' 'MemAvailable: 50272492 kB' 'Buffers: 2704 kB' 'Cached: 9386476 kB' 'SwapCached: 0 kB' 'Active: 6376236 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989924 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481220 kB' 'Mapped: 199768 kB' 'Shmem: 5511972 kB' 'KReclaimable: 167152 kB' 'Slab: 495804 kB' 'SReclaimable: 167152 kB' 'SUnreclaim: 328652 kB' 'KernelStack: 12784 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7076096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.961 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.962 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.962 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.962 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.962 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.962 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
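The setup/common.sh@31-@32 records above and below are the body of the get_meminfo helper walking every key of the meminfo snapshot it just printed and skipping each one until it reaches the requested key (first HugePages_Surp, here HugePages_Rsvd). A minimal sketch of that parsing logic, reconstructed from this trace rather than copied from the SPDK sources (the name get_meminfo_sketch and the exact loop structure are assumptions), would be:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node N " prefixes

    # get_meminfo_sketch KEY [NODE] - print the value of KEY from /proc/meminfo,
    # or from the per-node copy under /sys/devices/system/node/ when NODE is given.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo mem var val _ line

        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " prefix of per-node files

        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # the repeated 'continue' records in the trace
            echo "$val"                         # e.g. 0 for HugePages_Surp on this host
            return 0
        done
        return 1
    }

    # Usage matching the trace: get_meminfo_sketch HugePages_Surp  -> prints 0 (surp=0 above)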
[... repeated setup/common.sh@32 '[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' checks, one per meminfo key from Cached through CmaFree, each followed by the same "# continue", "# IFS=': '" and "# read -r var val _" records as above ...]
00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:11.963 nr_hugepages=1024 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:11.963 resv_hugepages=0 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:11.963 surplus_hugepages=0 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:11.963 anon_hugepages=0 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:11.963 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
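The hugepage counters reported just above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0) are what the consistency checks traced at setup/hugepages.sh@107 and @109 operate on: the even_2G_alloc case expects the pool it configured, 1024 pages of 2048 kB each (1024 * 2048 kB = 2097152 kB = 2 GiB, matching the 'Hugetlb: 2097152 kB' line in the snapshots), to consist entirely of persistent pages with no surplus or reserved pages. An illustrative restatement of those two arithmetic checks, using the values from this run (the variable names and messages are for the example only, not from the SPDK script):

    # Values as reported in the meminfo snapshots printed above.
    nr_hugepages=1024   # HugePages_Total
    surp=0              # HugePages_Surp
    resv=0              # HugePages_Rsvd

    # setup/hugepages.sh@107: the whole expected pool is accounted for ...
    (( 1024 == nr_hugepages + surp + resv )) && echo "pool fully accounted for"
    # setup/hugepages.sh@109: ... and is exactly the requested page count.
    (( 1024 == nr_hugepages )) && echo "exactly 1024 persistent 2048 kB pages (2 GiB)"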
00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46806824 kB' 'MemAvailable: 50273768 kB' 'Buffers: 2704 kB' 'Cached: 9386500 kB' 'SwapCached: 0 kB' 'Active: 6376264 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989952 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481216 kB' 'Mapped: 199768 kB' 'Shmem: 5511996 kB' 'KReclaimable: 167152 kB' 'Slab: 495804 kB' 'SReclaimable: 167152 kB' 'SUnreclaim: 328652 kB' 'KernelStack: 12784 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7076120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.964 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.964 14:04:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
[... repeated setup/common.sh@32 '[[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]' checks, one per meminfo key from SwapCached through Percpu, each followed by the same "# continue", "# IFS=': '" and "# read -r var val _" records as above ...]
00:03:11.965 14:04:41 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:11.965 
14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.965 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22595760 kB' 'MemUsed: 10281180 kB' 'SwapCached: 0 kB' 'Active: 4963848 kB' 'Inactive: 3355476 kB' 'Active(anon): 4696716 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3355476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8171604 kB' 'Mapped: 88540 kB' 'AnonPages: 150888 kB' 'Shmem: 4548996 kB' 'KernelStack: 6680 kB' 'PageTables: 
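The repeated common.sh@31/@32 records above are one loop: get_meminfo reads the node's meminfo file, strips the "Node N " prefix that the sysfs copy carries, and walks the keys until the requested field matches. A minimal standalone sketch of that lookup pattern follows; the function and variable names are illustrative, not the script's own code.

    # Sketch only: the lookup pattern the setup/common.sh trace above exercises.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy, whose lines carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")          # strip "Node N " (extglob pattern)
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total     -> 1024 on this box
    #      get_meminfo_sketch HugePages_Surp 0    -> 0 (node0 reports no surplus pages)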
00:03:11.966 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (field scan skips MemTotal through HugePages_Free until HugePages_Surp matches)
00:03:11.967 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.967 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:11.967 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:11.967 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115-117 -- # get_meminfo HugePages_Surp 1
00:03:11.967 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17-29 -- # mem_f=/sys/devices/system/node/node1/meminfo, mapfile -t mem
00:03:11.967 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 24210628 kB' 'MemUsed: 3454144 kB' 'SwapCached: 0 kB' 'Active: 1412952 kB' 'Inactive: 135420 kB' 'Active(anon): 1293772 kB' 'Inactive(anon): 0 kB' 'Active(file): 119180 kB' 'Inactive(file): 135420 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1217644 kB' 'Mapped: 111116 kB' 'AnonPages: 330416 kB' 'Shmem: 963044 kB' 'KernelStack: 6136 kB' 'PageTables: 4992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84828 kB' 'Slab: 224184 kB' 'SReclaimable: 84828 kB' 'SUnreclaim: 139356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:11.967 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (field scan skips MemTotal through HugePages_Free until HugePages_Surp matches)
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126-127 -- # sorted_t[nodes_test[node]]=1, sorted_s[nodes_sys[node]]=1 (both nodes)
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:11.969 node0=512 expecting 512
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:11.969 node1=512 expecting 512
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:11.969
00:03:11.969 real 0m1.457s
00:03:11.969 user 0m0.625s
00:03:11.969 sys 0m0.794s
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:11.969 14:04:41 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:11.969 ************************************
00:03:11.969 END TEST even_2G_alloc
00:03:11.969 ************************************
00:03:11.969 14:04:41 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:11.969 14:04:41 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:11.969 14:04:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:11.969 14:04:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:12.229 14:04:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
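For reference, the per-node verification that just passed reads each node's 2048 kB hugepage count back from sysfs and compares it with the expected share, printing the "nodeN=X expecting Y" lines seen above. A hedged sketch of that shape follows; the path and names are assumptions, not the test's own helpers.

    # Sketch only: the shape of the check behind "node0=512 expecting 512".
    check_per_node_hugepages() {
        local expected=$1 node actual rc=0
        for node in /sys/devices/system/node/node[0-9]*; do
            actual=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
            echo "${node##*/}=$actual expecting $expected"
            [[ $actual -eq $expected ]] || rc=1
        done
        return $rc
    }
    # e.g. check_per_node_hugepages 512   # both nodes report 512 here, so it returns 0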
00:03:12.229 ************************************
00:03:12.229 START TEST odd_alloc
00:03:12.229 ************************************
00:03:12.229 14:04:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:12.229 14:04:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:12.229 14:04:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49-57 -- # size=2098176 -> nr_hugepages=1025
00:03:12.229 14:04:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58-84 -- # get_test_nr_hugepages_per_node: _nr_hugepages=1025, _no_nodes=2 -> nodes_test[1]=512, nodes_test[0]=513
00:03:12.229 14:04:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 HUGE_EVEN_ALLOC=yes setup output
00:03:12.229 14:04:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@9-10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:13.168 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:13.168 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:13.168 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:13.168 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:13.168 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:13.168 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:13.168 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:13.168 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:13.168 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:13.168 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:13.168 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:13.168 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:13.168 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:13.168 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:13.168 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:13.168 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:13.168 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:13.432 14:04:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:03:13.432 14:04:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:13.432 14:04:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:13.432 14:04:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:13.432 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 -- # mem_f=/proc/meminfo, mapfile -t mem
00:03:13.433 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46805876 kB' 'MemAvailable: 50272792 kB' 'Buffers: 2704 kB' 'Cached: 9386592 kB' 'SwapCached: 0 kB' 'Active: 6374084 kB' 'Inactive: 3490896 kB' 'Active(anon): 5987772 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478824 kB' 'Mapped: 198984 kB' 'Shmem: 5512088 kB' 'KReclaimable: 167096 kB' 'Slab: 495408 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328312 kB' 'KernelStack: 12976 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7063328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB'
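The get_test_nr_hugepages_per_node trace above split the odd total of 1025 pages into nodes_test[1]=512 and nodes_test[0]=513. A small sketch that reproduces that split follows, under the assumption that each node receives the integer share of whatever remains while node indices are walked downward, so the remainder lands on node 0; the function name is illustrative.

    # Sketch only: reproduces the 1025 -> 512/513 split shown in hugepages.sh@81-84.
    split_hugepages() {
        local nr=$1 nodes=$2 i
        local -a per_node
        for (( i = nodes; i > 0; i-- )); do
            per_node[i-1]=$(( nr / i ))          # integer share of what is left
            nr=$(( nr - per_node[i-1] ))
        done
        declare -p per_node
    }
    # split_hugepages 1025 2   ->  declare -a per_node=([0]="513" [1]="512")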
00:03:13.433 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (field scan skips MemTotal through HardwareCorrupted until AnonHugePages matches)
00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17-29 -- # mem_f=/proc/meminfo, mapfile -t mem
00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46804608 kB' 'MemAvailable: 50271524 kB' 'Buffers: 2704 kB' 'Cached: 9386596 kB' 'SwapCached: 0 kB' 'Active: 6375424 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989112 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 479764 kB' 'Mapped: 198976 kB' 'Shmem: 5512092 kB' 'KReclaimable: 167096 kB' 'Slab: 495396 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328300 kB' 'KernelStack: 13216 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7063344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.434 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.435 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
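The snapshot above is the input for the key-by-key walk that follows. For readability, here is a rough reconstruction of the get_meminfo helper whose xtrace this is, inferred only from the traced statements (setup/common.sh@16-33); the real test/setup/common.sh may lay this out differently, and the handling of the optional NUMA-node argument is an assumption. The helper picks a meminfo file, strips any per-node "Node <n>" prefix, then skips every field with "continue" until the requested key matches, prints its value and returns.

#!/usr/bin/env bash
# Sketch reconstructed from the xtrace above; not the literal SPDK setup/common.sh.
shopt -s extglob  # needed for the "Node +([0-9]) " prefix strip below

get_meminfo() {
	local get=$1       # key to look up, e.g. AnonHugePages or HugePages_Surp
	local node=${2:-}  # optional NUMA node number; empty means system-wide totals
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# Assumed node handling: prefer the per-node meminfo file when one exists.
	if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node <n> " prefix

	# The long runs of @32 "continue" lines in the trace are this loop skipping
	# every field until the requested one is reached.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "${val:-0}"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
}

get_meminfo AnonHugePages    # prints 0 here, matching the anon=0 assignment above
get_meminfo HugePages_Total  # prints 1025, the odd count this test case allocates

With no node argument the trace tests the degenerate path /sys/devices/system/node/node/meminfo, finds it missing, and falls back to the system-wide /proc/meminfo, which is what the [[ -e ... ]] and [[ -n '' ]] pair visible above corresponds to.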
[... setup/common.sh@31-32: per-key walk over the snapshot above, MemTotal through HugePages_Rsvd; every field fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and is skipped with "continue" ...]
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
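At this point anon=0 and surp=0 are in hand, and the next lookup (HugePages_Rsvd) repeats the same walk over a fresh snapshot. Purely for reference, the value each traced walk extracts could also be read with a one-line awk filter; these commands are an illustrative equivalent, not something the SPDK test scripts actually run:

# Illustrative equivalents of the traced lookups (not part of the test itself):
awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo      # -> 0 in the snapshots above
awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo      # -> 0
awk '$1 == "Hugepagesize:"   {print $2, $3}' /proc/meminfo  # -> 2048 kB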
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.436 14:04:42 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46808048 kB' 'MemAvailable: 50274964 kB' 'Buffers: 2704 kB' 'Cached: 9386616 kB' 'SwapCached: 0 kB' 'Active: 6373084 kB' 'Inactive: 3490896 kB' 'Active(anon): 5986772 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478216 kB' 'Mapped: 198940 kB' 'Shmem: 5512112 kB' 'KReclaimable: 167096 kB' 'Slab: 495460 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328364 kB' 'KernelStack: 12752 kB' 'PageTables: 7288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7061004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB'
[... setup/common.sh@31-32: per-key walk over the snapshot above, MemTotal through HugePages_Free; every field fails the \H\u\g\e\P\a\g\e\s\_\R\s\v\d match and is skipped with "continue" ...]
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:13.438 nr_hugepages=1025
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:13.438 resv_hugepages=0
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:13.438 surplus_hugepages=0
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:13.438 anon_hugepages=0
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
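The four echoes and the two arithmetic checks above are the payoff of the lookups: with anon=0, surp=0 and resv=0, the 1025 hugepages requested by the odd_alloc case are all plain pre-allocated pages, so 1025 == nr_hugepages + surp + resv holds and the test moves on to re-read HugePages_Total. Below is a minimal sketch of that bookkeeping, assuming the get_meminfo reconstruction shown earlier; the check_odd_alloc name and its argument are illustrative, and the real test/setup/hugepages.sh may differ in detail:

# Hypothetical wrapper; the real logic lives in test/setup/hugepages.sh.
check_odd_alloc() {
	local target=1025      # odd hugepage count requested by this test case
	local nr_hugepages=$1  # kernel-reported count, obtained earlier in the test

	local anon surp resv
	anon=$(get_meminfo AnonHugePages)   # 0 in the trace above
	surp=$(get_meminfo HugePages_Surp)  # 0
	resv=$(get_meminfo HugePages_Rsvd)  # 0

	echo "nr_hugepages=$nr_hugepages"
	echo "resv_hugepages=$resv"
	echo "surplus_hugepages=$surp"
	echo "anon_hugepages=$anon"

	# The two checks the trace evaluates at hugepages.sh@107 and @109: the odd
	# request must be fully accounted for, with nothing surplus or reserved.
	(( target == nr_hugepages + surp + resv )) || return 1
	(( target == nr_hugepages )) || return 1

	# The trace then re-reads HugePages_Total (hugepages.sh@110).
	(( $(get_meminfo HugePages_Total) == target ))
}

check_odd_alloc 1025  # 1025 == 1025 + 0 + 0 on this node, so the check passes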
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:13.438 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46806624 kB' 'MemAvailable: 50273540 kB' 'Buffers: 2704 kB' 'Cached: 9386636 kB' 'SwapCached: 0 kB' 'Active: 6372972 kB' 'Inactive: 3490896 kB' 'Active(anon): 5986660 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477644 kB' 'Mapped: 198940 kB' 'Shmem: 5512132 kB' 'KReclaimable: 167096 kB' 'Slab: 495460 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328364 kB' 'KernelStack: 12672 kB' 'PageTables: 7372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7061024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB'
[... setup/common.sh@31-32: per-key walk over the snapshot above, MemTotal through SecPageTables; every field fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and is skipped with "continue" ...]
00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.439 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22581912 kB' 'MemUsed: 10295028 kB' 'SwapCached: 0 kB' 'Active: 4962632 kB' 'Inactive: 3355476 kB' 'Active(anon): 4695500 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3355476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8171620 kB' 'Mapped: 87940 kB' 'AnonPages: 149608 kB' 'Shmem: 4549012 kB' 'KernelStack: 6680 kB' 'PageTables: 2872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82268 kB' 'Slab: 271400 kB' 'SReclaimable: 82268 kB' 'SUnreclaim: 189132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 
0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.440 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
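The long runs of `continue` in the trace above and below come from the script scanning every key in /proc/meminfo (or a node's meminfo file) until it reaches the field it was asked for. As a reading aid, here is a minimal standalone bash sketch of that lookup; it is an illustration under assumptions, not the actual SPDK setup/common.sh implementation, and the helper name `get_meminfo_sketch` is invented for this example.

```bash
#!/usr/bin/env bash
# Minimal sketch of a get_meminfo-style lookup (illustrative only, not the
# SPDK setup/common.sh source). Prints the value of one field from
# /proc/meminfo, or from a node's meminfo file when a node number is given.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local -a mem
    local line var val _

    # Per-node files live under /sys/devices/system/node/nodeN/meminfo and
    # prefix every line with "Node N ", which is stripped before parsing.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# Example mirroring the trace: surplus hugepages on NUMA node 0.
get_meminfo_sketch HugePages_Surp 0
```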
00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.441 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 24225968 kB' 'MemUsed: 3438804 kB' 'SwapCached: 0 kB' 'Active: 1410668 kB' 'Inactive: 135420 kB' 'Active(anon): 1291488 kB' 'Inactive(anon): 0 kB' 'Active(file): 119180 kB' 'Inactive(file): 135420 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1217728 kB' 'Mapped: 111000 kB' 'AnonPages: 328356 kB' 'Shmem: 963128 kB' 'KernelStack: 5992 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84828 kB' 'Slab: 224060 kB' 'SReclaimable: 84828 kB' 'SUnreclaim: 139232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
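A few entries further down, the test prints `node0=512 expecting 513` / `node1=513 expecting 512` and still passes. The following loose sketch (an assumption-based reconstruction, not the SPDK hugepages.sh code) shows why: with an odd total of 1025 pages split across two NUMA nodes, the check compares only the sorted set of per-node counts, so it does not matter which node received the extra page.

```bash
#!/usr/bin/env bash
# Loose, assumption-based sketch of the odd_alloc verdict (not the SPDK
# hugepages.sh source). 1025 hugepages were requested across two NUMA nodes;
# the check compares the sorted set of per-node counts, so which node ended
# up with the odd extra page is irrelevant.
nodes_test=(512 513)   # per-node counts the test computed it should see
nodes_sys=(513 512)    # per-node counts actually reported by the system

sorted_t=()
sorted_s=()
for node in "${!nodes_test[@]}"; do
    # Using the count itself as a numeric index yields an implicitly sorted
    # list of distinct counts when the indices are expanded later.
    sorted_t[nodes_test[node]]=1
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
done

# "${!sorted_t[*]}" expands to the indices in ascending order, e.g. "512 513".
if [[ "${!sorted_t[*]}" == "${!sorted_s[*]}" ]]; then
    echo "odd_alloc distribution OK"
fi
```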
00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.442 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.443 14:04:43 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.443 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:13.702 node0=512 expecting 513 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:13.702 node1=513 expecting 512 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:13.702 00:03:13.702 real 0m1.462s 00:03:13.702 user 0m0.630s 00:03:13.702 sys 0m0.796s 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.702 14:04:43 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:13.702 ************************************ 00:03:13.702 END TEST odd_alloc 00:03:13.702 ************************************ 00:03:13.702 14:04:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:13.702 14:04:43 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:13.702 14:04:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.702 14:04:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.702 14:04:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:13.702 ************************************ 00:03:13.702 START TEST custom_alloc 00:03:13.702 ************************************ 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.702 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:13.703 14:04:43 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:13.703 14:04:43 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:14.638 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:14.638 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:14.638 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:14.638 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:14.638 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:14.638 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:14.638 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:14.638 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:14.638 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:14.638 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:14.638 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:14.638 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:03:14.638 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:14.639 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:14.639 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:14.639 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:14.639 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45756632 kB' 'MemAvailable: 49223548 kB' 'Buffers: 2704 kB' 'Cached: 9386724 kB' 'SwapCached: 0 kB' 'Active: 6373220 kB' 'Inactive: 3490896 kB' 'Active(anon): 5986908 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477912 kB' 'Mapped: 198988 kB' 'Shmem: 5512220 kB' 'KReclaimable: 167096 kB' 'Slab: 495836 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328740 kB' 'KernelStack: 12752 kB' 'PageTables: 7444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7061268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:14.900 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.901 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45756840 kB' 'MemAvailable: 49223756 kB' 'Buffers: 2704 kB' 'Cached: 9386728 kB' 'SwapCached: 0 kB' 'Active: 6373268 kB' 'Inactive: 3490896 kB' 'Active(anon): 5986956 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477892 kB' 'Mapped: 198952 kB' 'Shmem: 5512224 kB' 'KReclaimable: 167096 kB' 'Slab: 495836 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328740 kB' 'KernelStack: 12768 kB' 'PageTables: 7480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7061288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.902 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.903 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45757452 kB' 'MemAvailable: 49224368 kB' 'Buffers: 2704 kB' 'Cached: 9386728 kB' 'SwapCached: 0 kB' 'Active: 6373364 kB' 'Inactive: 3490896 kB' 'Active(anon): 5987052 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478024 kB' 
'Mapped: 198952 kB' 'Shmem: 5512224 kB' 'KReclaimable: 167096 kB' 'Slab: 495832 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328736 kB' 'KernelStack: 12752 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7061308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.904 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.905 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
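The xtrace above is setup/common.sh's get_meminfo helper walking the memory statistics one key at a time: with node unset it falls back to /proc/meminfo, splits each line on ': ' via read, hits the continue branch for every key that is not the requested HugePages_Rsvd, and the matching line immediately below echoes its value and returns. A simplified reconstruction of that loop, based only on the statements visible in the trace (the real helper may differ in detail), looks like this:

```bash
# Simplified reconstruction of the traced get_meminfo helper from
# setup/common.sh (assumption: details may differ from the real script).
shopt -s extglob
get_meminfo() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    # Per-node statistics live under /sys; with node empty (as in the
    # trace above) this check fails and the global /proc/meminfo is used.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the requested key
        echo "$val"                        # e.g. 0 for HugePages_Rsvd
        return 0
    done
    return 1
}
```

With the values shown in the trace, `get_meminfo HugePages_Rsvd` prints 0, which hugepages.sh then records as resv=0 just below.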
00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:14.906 nr_hugepages=1536 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:14.906 resv_hugepages=0 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:14.906 surplus_hugepages=0 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:14.906 anon_hugepages=0 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45757452 kB' 'MemAvailable: 49224368 kB' 'Buffers: 2704 kB' 'Cached: 9386768 kB' 'SwapCached: 0 kB' 'Active: 6373244 kB' 'Inactive: 3490896 kB' 'Active(anon): 5986932 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 477864 kB' 'Mapped: 198952 kB' 'Shmem: 5512264 kB' 'KReclaimable: 167096 kB' 'Slab: 495832 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328736 kB' 'KernelStack: 12752 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7061328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.906 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
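The meminfo snapshot printed above reports HugePages_Total: 1536, HugePages_Free: 1536 and Hugepagesize: 2048 kB; a quick arithmetic check (illustrative, not part of the test) confirms this matches the reported Hugetlb figure:

```bash
# 1536 huge pages of 2048 kB each account for the full Hugetlb figure.
echo "$((1536 * 2048)) kB"   # -> 3145728 kB (3 GiB), matches 'Hugetlb: 3145728 kB'
```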
00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.168 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
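At this point hugepages.sh has confirmed the global count (1536 == nr_hugepages + surp + resv) and get_nodes has recorded the expected custom split, 512 pages on node0 and 1024 on node1, with no_nodes=2. The trace that follows walks each node, folds the reserved and surplus counts into nodes_test[], and reads the node-local hugepage counters from /sys/devices/system/node/nodeN/meminfo. A rough sketch of that per-node check follows; the node_hp helper and the final comparison are illustrative assumptions, not SPDK code:

```bash
# Expected custom split taken from the trace: 512 pages on node0, 1024 on node1.
declare -A nodes_test=([0]=512 [1]=1024)
resv=0   # HugePages_Rsvd read earlier in the trace

# Hypothetical helper: read one HugePages_* counter from a node's meminfo,
# whose lines look like "Node 0 HugePages_Total:   512".
node_hp() { awk -v k="$2:" '$3 == k { print $4 }' "/sys/devices/system/node/node$1/meminfo"; }

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                # hugepages.sh@116
    (( nodes_test[node] += $(node_hp "$node" HugePages_Surp) ))   # @117, 0 in this run
    total=$(node_hp "$node" HugePages_Total)
    if (( nodes_test[node] != total )); then
        echo "node$node: expected ${nodes_test[node]} huge pages, kernel reports $total"
    fi
done
```

In this run the per-node snapshots below report HugePages_Total of 512 and 1024 with zero surplus, so the expected and observed counts agree on both nodes.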
00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22578312 kB' 'MemUsed: 10298628 kB' 'SwapCached: 0 kB' 'Active: 4962764 kB' 'Inactive: 3355476 kB' 'Active(anon): 4695632 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3355476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8171624 kB' 'Mapped: 87940 kB' 'AnonPages: 149764 kB' 'Shmem: 4549016 kB' 'KernelStack: 6680 kB' 'PageTables: 2872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82268 kB' 'Slab: 271652 kB' 'SReclaimable: 82268 kB' 'SUnreclaim: 189384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.169 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:15.170 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23178728 kB' 'MemUsed: 4486044 kB' 'SwapCached: 0 kB' 'Active: 1410680 kB' 'Inactive: 135420 kB' 'Active(anon): 1291500 kB' 'Inactive(anon): 0 kB' 'Active(file): 119180 kB' 'Inactive(file): 135420 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1217888 kB' 'Mapped: 111012 kB' 'AnonPages: 328308 kB' 'Shmem: 963288 kB' 'KernelStack: 6088 kB' 'PageTables: 4628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84828 kB' 'Slab: 224180 kB' 'SReclaimable: 84828 kB' 'SUnreclaim: 139352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.170 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.171 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:15.172 node0=512 expecting 512 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:15.172 node1=1024 expecting 1024 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:15.172 00:03:15.172 real 0m1.479s 00:03:15.172 user 0m0.604s 00:03:15.172 sys 0m0.833s 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:15.172 14:04:44 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:15.172 ************************************ 00:03:15.172 END TEST custom_alloc 00:03:15.172 ************************************ 00:03:15.172 14:04:44 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:15.172 14:04:44 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:15.172 14:04:44 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:15.172 14:04:44 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.172 14:04:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.172 ************************************ 00:03:15.172 START TEST no_shrink_alloc 00:03:15.172 ************************************ 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g 
nodes_test 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.172 14:04:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.554 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:16.554 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.554 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:16.554 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:16.554 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:16.554 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:16.554 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:16.554 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:16.554 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:16.554 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:16.554 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:16.554 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:16.554 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:16.554 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:16.554 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:16.554 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:16.554 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.554 14:04:46 
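The trace above marks the start of the no_shrink_alloc subtest: get_test_nr_hugepages 2097152 0 requests a hugepage pool pinned to NUMA node 0, which with the 2048 kB default hugepage size works out to nr_hugepages=1024, all of it assigned to nodes_test[0] by get_test_nr_hugepages_per_node. scripts/setup.sh then runs (every listed PCI device is already bound to vfio-pci) and verify_nr_hugepages starts reading counters from /proc/meminfo. Below is a minimal sketch of that sizing step, under the assumption that the size argument is in kB (the log's numbers are consistent with 2097152 / 2048 = 1024); plan_hugepages and nodes_plan are illustrative names, not the real hugepages.sh helpers.

#!/usr/bin/env bash
# Sketch of the hugepage sizing shown in the trace. Assumes the requested size is
# given in kB, as the numbers in the log suggest (2097152 kB / 2048 kB = 1024 pages).
# plan_hugepages and nodes_plan are illustrative, not the actual hugepages.sh helpers.

default_hugepages=2048            # kB, matches 'Hugepagesize: 2048 kB' in the log

plan_hugepages() {
    local size=$1; shift          # requested pool size in kB, e.g. 2097152
    local -a node_ids=("$@")      # explicit NUMA nodes, e.g. (0)
    local nr_hugepages=$(( size / default_hugepages ))

    declare -gA nodes_plan=()
    local node
    if (( ${#node_ids[@]} > 0 )); then
        for node in "${node_ids[@]}"; do
            nodes_plan[$node]=$nr_hugepages   # the traced case: one explicit node gets the whole pool
        done
    else
        nodes_plan[0]=$nr_hugepages           # no explicit node: put everything on node 0
    fi
}

plan_hugepages 2097152 0
echo "node0=${nodes_plan[0]}"                 # prints node0=1024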
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.554 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46800900 kB' 'MemAvailable: 50267816 kB' 'Buffers: 2704 kB' 'Cached: 9386852 kB' 'SwapCached: 0 kB' 'Active: 6374060 kB' 'Inactive: 3490896 kB' 'Active(anon): 5987748 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478532 kB' 'Mapped: 199000 kB' 'Shmem: 5512348 kB' 'KReclaimable: 167096 kB' 'Slab: 495616 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328520 kB' 'KernelStack: 12784 kB' 'PageTables: 7492 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7061496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
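The long printf above is get_meminfo dumping the full /proc/meminfo snapshot (no node argument was given, which is also why the /sys/devices/system/node/node/meminfo existence check degenerates to an empty node id and fails). The long runs of IFS=': ' / read -r var val _ / continue that follow are the field-by-field scan for the single key requested, AnonHugePages here. A minimal sketch of that lookup pattern as it appears in the trace (system-wide only; the real setup/common.sh helper also handles per-node meminfo files):

#!/usr/bin/env bash
# Sketch of the lookup pattern visible in the xtrace: walk /proc/meminfo one line
# at a time, skip every key that is not the requested one, and print the value of
# the first match. Mirrors the trace, not the literal setup/common.sh source.

get_meminfo() {
    local get=$1                  # e.g. AnonHugePages, HugePages_Surp, HugePages_Rsvd
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated 'continue' entries in the log
        echo "$val"                        # value only; a trailing 'kB' lands in $_
        return 0
    done < /proc/meminfo
    echo 0                                 # key not present
}

get_meminfo AnonHugePages      # 0 on the machine in this log
get_meminfo HugePages_Total    # 1024 on the machine in this log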
00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
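A side note on why the comparisons in this scan are rendered as [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]: with set -x, bash backslash-escapes every character of an unquoted pattern operand on the right of == inside [[ ]], so the plain string AnonHugePages shows up escaped in the trace. A short reproduction:

#!/usr/bin/env bash
# Reproduces the backslash-escaped patterns seen in the trace output.
set -x
var=Unevictable
get=AnonHugePages
[[ $var == $get ]] && echo match || echo no-match
# xtrace prints: + [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]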
00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.555 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46801360 kB' 'MemAvailable: 50268276 kB' 'Buffers: 2704 kB' 'Cached: 9386864 kB' 'SwapCached: 0 kB' 'Active: 6373776 kB' 'Inactive: 3490896 kB' 'Active(anon): 5987464 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478268 kB' 'Mapped: 198968 kB' 'Shmem: 5512360 kB' 'KReclaimable: 167096 kB' 'Slab: 495616 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328520 kB' 'KernelStack: 12800 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7061880 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.556 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 
14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.557 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46801776 kB' 'MemAvailable: 50268692 kB' 'Buffers: 2704 kB' 'Cached: 9386880 kB' 'SwapCached: 0 kB' 'Active: 6373712 kB' 'Inactive: 3490896 kB' 'Active(anon): 5987400 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478220 kB' 'Mapped: 198968 kB' 'Shmem: 5512376 kB' 'KReclaimable: 167096 kB' 'Slab: 495700 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328604 kB' 'KernelStack: 12800 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7061904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 
'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 
14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.558 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.559 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:16.560 nr_hugepages=1024 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:16.560 resv_hugepages=0 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:16.560 surplus_hugepages=0 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:16.560 anon_hugepages=0 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46801524 kB' 'MemAvailable: 50268440 kB' 'Buffers: 2704 kB' 'Cached: 9386900 kB' 'SwapCached: 0 kB' 'Active: 6373772 kB' 'Inactive: 3490896 kB' 'Active(anon): 5987460 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478256 kB' 'Mapped: 198968 kB' 'Shmem: 5512396 kB' 'KReclaimable: 167096 kB' 'Slab: 495700 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328604 kB' 'KernelStack: 12816 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7061928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.560 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.561 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
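Editor's note: the trace above is one lookup loop repeated once per /proc/meminfo field — setup/common.sh reads the file with IFS=': ', skips every key that is not the one requested (HugePages_Surp, HugePages_Rsvd, HugePages_Total), and echoes the matching value back to setup/hugepages.sh. Below is a minimal sketch of that pattern; the function name get_meminfo_sketch and the sed-based per-node prefix handling are illustrative assumptions, not the exact helper being traced.

    #!/usr/bin/env bash
    # Sketch of the per-field lookup loop traced above (illustrative names, not the
    # exact setup/common.sh helper): scan a meminfo file, skip every key that is not
    # the requested one, and print the value of the first match.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # When a NUMA node is given and its meminfo file exists, read that instead.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        # Per-node files prefix each line with "Node <n> "; strip that first, then
        # split on ':' and ' ' exactly like the traced "read -r var val _" loop.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    # Example: the two queries the trace answers with 0 on this machine.
    echo "surplus=$(get_meminfo_sketch HugePages_Surp) reserved=$(get_meminfo_sketch HugePages_Rsvd)"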
00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21528256 kB' 'MemUsed: 11348684 kB' 'SwapCached: 0 kB' 'Active: 4962732 kB' 'Inactive: 3355476 kB' 'Active(anon): 4695600 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3355476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8171636 kB' 'Mapped: 87940 kB' 'AnonPages: 149700 kB' 'Shmem: 4549028 kB' 'KernelStack: 6696 kB' 'PageTables: 2912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82268 kB' 'Slab: 271620 kB' 'SReclaimable: 82268 kB' 'SUnreclaim: 189352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.562 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue
00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
... (the same IFS=': ' / read -r var val _ / [[ $var == HugePages_Surp ]] / continue cycle repeats for Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted and HugePages_Total; none of them matches HugePages_Surp, so the loop continues) ...
00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
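What is being traced here is the test's get_meminfo helper from setup/common.sh: it walks the memory counters one 'name: value' pair at a time and only stops when it reaches the counter it was asked for (HugePages_Surp in this pass). A minimal, self-contained sketch of that pattern, with simplified names rather than the exact SPDK code, is:

    # get_meminfo NAME: print the value of one /proc/meminfo counter.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every other field
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1                               # counter not present
    }

    get_meminfo HugePages_Surp   # prints 0 on this machine, per the trace

The echo 0 and return 0 entries at setup/common.sh@33 just below are the last two lines of that loop firing once HugePages_Surp is reached; its current value is 0.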
00:03:16.563 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:16.564 node0=1024 expecting 1024 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:16.564 14:04:46 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:17.942 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:17.942 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:17.942 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:17.942 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:17.942 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:17.942 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:17.942 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:17.942 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:17.942 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:17.942 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:17.942 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:17.942 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:17.942 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:17.942 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:17.942 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:17.942 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:17.942 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:17.942 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # 
verify_nr_hugepages 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46794032 kB' 'MemAvailable: 50260948 kB' 'Buffers: 2704 kB' 'Cached: 9386964 kB' 'SwapCached: 0 kB' 'Active: 6375640 kB' 'Inactive: 3490896 kB' 'Active(anon): 5989328 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480088 kB' 'Mapped: 198968 kB' 'Shmem: 5512460 kB' 'KReclaimable: 167096 kB' 'Slab: 495624 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328528 kB' 'KernelStack: 13184 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7064500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196384 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.942 14:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.942 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
... (the loop reads and skips MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp and CommitLimit in the same way; none of them is AnonHugePages) ...
00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.943 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46794636 kB' 'MemAvailable: 50261552 kB' 'Buffers: 2704 kB' 'Cached: 9386964 kB' 'SwapCached: 0 kB' 'Active: 6376452 kB' 'Inactive: 3490896 kB' 'Active(anon): 5990140 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480836 kB' 'Mapped: 198984 kB' 'Shmem: 5512460 kB' 'KReclaimable: 167096 kB' 'Slab: 495640 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328544 kB' 'KernelStack: 13216 kB' 'PageTables: 8972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7062156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196240 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
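This scan and the AnonHugePages and HugePages_Rsvd scans around it are verify_nr_hugepages (setup/hugepages.sh) collecting the counters it needs before redoing the per-node comparison that produced 'node0=1024 expecting 1024' above. Stripped of the script's own bookkeeping, and using plain awk in place of its get_meminfo helper, the check amounts to something like:

    # Global hugepage counters, as verify_nr_hugepages queries them.
    anon=$(awk '$1 == "AnonHugePages:"  {print $2}' /proc/meminfo)   # kB of anon THP
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)   # surplus pages
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)   # reserved pages

    # Per-node count, read from sysfs here purely for illustration (the traced
    # script goes through its get_meminfo helper instead); 2048kB matches the
    # Hugepagesize reported in the meminfo snapshots above.
    node0=$(cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages)
    echo "node0=$node0 expecting 1024"
    [[ $node0 -eq 1024 ]]

In this run anon and surp both come back 0 (the HugePages_Rsvd scan follows further down).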
00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.944 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
... (the loop reads and skips Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped and CmaTotal in the same way; none of them is HugePages_Surp) ...
00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var
val _ 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46796944 kB' 'MemAvailable: 50263860 kB' 'Buffers: 2704 kB' 'Cached: 9386988 kB' 'SwapCached: 0 kB' 'Active: 6374048 kB' 'Inactive: 3490896 kB' 'Active(anon): 5987736 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478400 kB' 'Mapped: 198972 kB' 'Shmem: 5512484 kB' 'KReclaimable: 167096 kB' 'Slab: 495672 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328576 kB' 'KernelStack: 12720 kB' 'PageTables: 7332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7062180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.945 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
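The [[ -e /sys/devices/system/node/node/meminfo ]] test and the mem=("${mem[@]#Node +([0-9]) }") expansion that open each of these scans show how the same helper can read either the machine-wide /proc/meminfo or a single NUMA node's view: when a node number is supplied it switches to that node's meminfo file and strips the 'Node N ' prefix those lines carry. A rough sketch of just that source-selection step (extglob assumed on, as the pattern requires):

    shopt -s extglob                       # +([0-9]) below is an extglob pattern

    node=$1                                # optional NUMA node number
    mem_f=/proc/meminfo                    # machine-wide view by default
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node 0 ", "Node 1 ", ...; strip it
    # so the same "name: value" parsing works for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"

With no node argument, as in the trace here (note the node/node/meminfo path), the helper falls back to /proc/meminfo, which is why every scan in this section sees the global counters.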
00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.946 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.207 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:18.208 nr_hugepages=1024 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.208 resv_hugepages=0 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.208 surplus_hugepages=0 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.208 anon_hugepages=0 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.208 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- 
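The scan traced above is setup/common.sh resolving one /proc/meminfo key: the whole file is read into an array with mapfile, any leading "Node <n> " prefix is stripped, and each line is split on ': ' until the requested field (here HugePages_Rsvd) matches, at which point its value is echoed and the function returns. The snapshot printed above is also internally consistent: HugePages_Total 1024 x Hugepagesize 2048 kB = Hugetlb 2097152 kB. A minimal, self-contained sketch of that style of lookup (hypothetical helper name, not the SPDK function itself):

    shopt -s extglob   # needed for the +([0-9]) pattern that strips per-node prefixes

    # meminfo_value KEY [NODE]: print the value column for KEY (unit dropped).
    # Reads /proc/meminfo, or the per-node meminfo file when NODE is given.
    meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    # e.g. resv=$(meminfo_value HugePages_Rsvd); free0=$(meminfo_value HugePages_Free 0)

The same field-by-field scan repeats below for HugePages_Total and then, per NUMA node, for HugePages_Surp.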
setup/common.sh@28 -- # mapfile -t mem 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 46796832 kB' 'MemAvailable: 50263748 kB' 'Buffers: 2704 kB' 'Cached: 9386992 kB' 'SwapCached: 0 kB' 'Active: 6374212 kB' 'Inactive: 3490896 kB' 'Active(anon): 5987900 kB' 'Inactive(anon): 0 kB' 'Active(file): 386312 kB' 'Inactive(file): 3490896 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 478564 kB' 'Mapped: 198972 kB' 'Shmem: 5512488 kB' 'KReclaimable: 167096 kB' 'Slab: 495672 kB' 'SReclaimable: 167096 kB' 'SUnreclaim: 328576 kB' 'KernelStack: 12720 kB' 'PageTables: 7332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7062200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 32064 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1427036 kB' 'DirectMap2M: 12124160 kB' 'DirectMap1G: 55574528 kB' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.209 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.210 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 
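Above, get_nodes enumerates /sys/devices/system/node/node<N> and records each node's 2 MiB hugepage count (1024 on node0, 0 on node1, hence no_nodes=2), and the per-node meminfo file is then consulted for HugePages_Surp. A hedged sketch of that enumeration, assuming the default 2048 kB hugepage size and using hypothetical variable names:

    shopt -s nullglob   # make the glob expand to nothing if no node directories exist

    declare -A node_pages=()
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        idx=${node_dir##*node}
        node_pages[$idx]=$(cat "$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
    done

    echo "found ${#node_pages[@]} NUMA node(s)"
    for idx in "${!node_pages[@]}"; do
        echo "node$idx: ${node_pages[$idx]} x 2 MiB hugepages"
    done

With those counts, the test's bookkeeping reduces to HugePages_Total == nr_hugepages + surplus + reserved, i.e. 1024 == 1024 + 0 + 0 in this run.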
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21528524 kB' 'MemUsed: 11348416 kB' 'SwapCached: 0 kB' 'Active: 4963240 kB' 'Inactive: 3355476 kB' 'Active(anon): 4696108 kB' 'Inactive(anon): 0 kB' 'Active(file): 267132 kB' 'Inactive(file): 3355476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8171636 kB' 'Mapped: 87940 kB' 'AnonPages: 150164 kB' 'Shmem: 4549028 kB' 'KernelStack: 6680 kB' 'PageTables: 2924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 82268 kB' 'Slab: 271668 kB' 'SReclaimable: 82268 kB' 'SUnreclaim: 189400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.211 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:18.212 node0=1024 expecting 1024 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:18.212 00:03:18.212 real 0m2.998s 00:03:18.212 user 0m1.244s 00:03:18.212 sys 0m1.681s 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.212 14:04:47 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.212 ************************************ 00:03:18.212 END TEST no_shrink_alloc 00:03:18.212 ************************************ 00:03:18.212 14:04:47 setup.sh.hugepages -- 
common/autotest_common.sh@1142 -- # return 0 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:18.212 14:04:47 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:18.212 00:03:18.212 real 0m11.771s 00:03:18.212 user 0m4.534s 00:03:18.212 sys 0m6.134s 00:03:18.212 14:04:47 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:18.212 14:04:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.212 ************************************ 00:03:18.212 END TEST hugepages 00:03:18.212 ************************************ 00:03:18.212 14:04:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:18.212 14:04:47 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:18.212 14:04:47 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:18.212 14:04:47 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:18.212 14:04:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:18.212 ************************************ 00:03:18.212 START TEST driver 00:03:18.212 ************************************ 00:03:18.212 14:04:47 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:18.212 * Looking for test storage... 
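For reference, the clear_hp teardown traced above reduces to a small per-node loop. A minimal sketch, assuming the bare `echo 0` captured in the trace is redirected into each size's nr_hugepages knob (xtrace does not print redirections, so the target is an assumption):

    # Reset every huge page pool on every NUMA node back to zero pages.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes   # exported for the later setup.sh runs, as in the trace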
00:03:18.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:18.212 14:04:47 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:18.212 14:04:47 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:18.212 14:04:47 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.745 14:04:50 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:20.745 14:04:50 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.745 14:04:50 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.745 14:04:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:20.745 ************************************ 00:03:20.745 START TEST guess_driver 00:03:20.745 ************************************ 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:20.745 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:20.745 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:20.745 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:20.745 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:20.745 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:20.745 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:20.745 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:20.745 14:04:50 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:20.745 Looking for driver=vfio-pci 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.745 14:04:50 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:22.122 14:04:51 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:23.060 14:04:52 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:23.060 14:04:52 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:23.060 14:04:52 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:23.317 14:04:52 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:23.317 14:04:52 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:23.317 14:04:52 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.317 14:04:52 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.851 00:03:25.851 real 0m5.068s 00:03:25.851 user 0m1.173s 00:03:25.851 sys 0m1.949s 00:03:25.851 14:04:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.851 14:04:55 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:25.851 ************************************ 00:03:25.851 END TEST guess_driver 00:03:25.851 ************************************ 00:03:25.851 14:04:55 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:25.851 00:03:25.851 real 0m7.675s 00:03:25.851 user 0m1.695s 00:03:25.851 sys 0m2.975s 00:03:25.851 14:04:55 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:25.851 14:04:55 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:25.851 ************************************ 00:03:25.851 END TEST driver 00:03:25.851 ************************************ 00:03:25.851 14:04:55 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:25.851 14:04:55 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:25.851 14:04:55 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:25.851 14:04:55 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:25.851 14:04:55 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.851 ************************************ 00:03:25.851 START TEST devices 00:03:25.851 ************************************ 00:03:25.851 14:04:55 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:26.108 * Looking for test storage... 00:03:26.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:26.108 14:04:55 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:26.108 14:04:55 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:26.108 14:04:55 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.108 14:04:55 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:27.484 14:04:57 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:27.484 
14:04:57 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:27.484 No valid GPT data, bailing 00:03:27.484 14:04:57 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:27.484 14:04:57 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:27.484 14:04:57 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:27.484 14:04:57 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:27.484 14:04:57 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:27.484 14:04:57 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:27.484 14:04:57 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.484 14:04:57 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:27.484 ************************************ 00:03:27.484 START TEST nvme_mount 00:03:27.484 ************************************ 00:03:27.484 14:04:57 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:27.484 14:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:27.484 14:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:27.484 14:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:27.484 14:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:27.484 14:04:57 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:27.484 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:27.485 14:04:57 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:28.862 Creating new GPT entries in memory. 00:03:28.862 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:28.862 other utilities. 00:03:28.862 14:04:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:28.862 14:04:58 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:28.862 14:04:58 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:28.862 14:04:58 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:28.862 14:04:58 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:29.840 Creating new GPT entries in memory. 00:03:29.840 The operation has completed successfully. 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 781955 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:29.840 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:29.841 14:04:59 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.841 14:04:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:30.778 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:31.037 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:31.037 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:31.295 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:31.295 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:31.295 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:31.295 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:31.295 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:31.295 14:05:00 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:31.295 14:05:00 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.295 14:05:00 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:31.295 14:05:00 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:31.295 14:05:00 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:31.555 14:05:00 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:31.556 14:05:00 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.556 14:05:00 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:32.492 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:32.751 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:32.752 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:32.752 14:05:02 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:32.752 14:05:02 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.752 14:05:02 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
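The long runs of `[[ <bdf> == \0\0\0\0\:\8\8\:\0\0\.\0 ]]` comparisons above and below are xtrace unrolling a single matching loop inside the verify helper: it walks the `setup.sh config` listing and, for the one allowed controller, checks that the status column names the expected active device. A rough, illustrative sketch of that loop (variable names, the status wording, and the abbreviated path are assumptions, not the verbatim SPDK code):

    PCI_ALLOWED=0000:88:00.0
    expected=data@nvme0n1        # nvme0n1:nvme0n1p1 or nvme0n1:nvme_dm_test in the other passes
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$PCI_ALLOWED" ]] || continue
        [[ $status == *"Active devices: "*"$expected"* ]] && found=1
    done < <(./scripts/setup.sh config)
    (( found == 1 ))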
00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:34.122 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:34.123 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:34.123 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.123 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:34.123 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:34.123 14:05:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:34.123 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:34.123 00:03:34.123 real 0m6.466s 00:03:34.123 user 0m1.507s 00:03:34.123 sys 0m2.550s 00:03:34.123 14:05:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.123 14:05:03 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:03:34.123 ************************************ 00:03:34.123 END TEST nvme_mount 00:03:34.123 ************************************ 00:03:34.123 14:05:03 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:03:34.123 14:05:03 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:34.123 14:05:03 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.123 14:05:03 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.123 14:05:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:34.123 ************************************ 00:03:34.123 START TEST dm_mount 00:03:34.123 ************************************ 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:34.123 14:05:03 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:35.060 Creating new GPT entries in memory. 00:03:35.060 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:35.060 other utilities. 00:03:35.060 14:05:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:35.060 14:05:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:35.060 14:05:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:03:35.060 14:05:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:35.060 14:05:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:36.436 Creating new GPT entries in memory. 00:03:36.436 The operation has completed successfully. 00:03:36.436 14:05:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:36.436 14:05:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:36.436 14:05:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:36.436 14:05:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:36.436 14:05:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:37.372 The operation has completed successfully. 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 784344 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.372 14:05:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.748 14:05:07 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.749 14:05:07 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:38.749 14:05:08 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.749 14:05:08 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:39.685 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:39.943 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:39.943 00:03:39.943 real 0m5.907s 00:03:39.943 user 0m1.055s 00:03:39.943 sys 0m1.725s 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:39.943 14:05:09 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:39.943 ************************************ 00:03:39.943 END TEST dm_mount 00:03:39.943 ************************************ 00:03:39.943 14:05:09 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:03:39.943 14:05:09 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:39.943 14:05:09 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:39.943 14:05:09 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:39.943 14:05:09 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:39.943 14:05:09 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:39.943 14:05:09 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:39.943 14:05:09 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:40.200 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:40.200 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:40.200 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:40.200 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:40.200 14:05:09 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:40.200 14:05:09 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:40.200 14:05:09 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:40.200 14:05:09 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:40.200 14:05:09 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:40.200 14:05:09 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:40.200 14:05:09 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:40.460 00:03:40.460 real 0m14.399s 00:03:40.460 user 0m3.295s 00:03:40.460 sys 0m5.336s 00:03:40.460 14:05:09 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.460 14:05:09 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:40.460 ************************************ 00:03:40.460 END TEST devices 00:03:40.460 ************************************ 00:03:40.460 14:05:09 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:40.460 00:03:40.460 real 0m44.965s 00:03:40.460 user 0m13.040s 00:03:40.460 sys 0m20.023s 00:03:40.460 14:05:09 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.460 14:05:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.460 ************************************ 00:03:40.460 END TEST setup.sh 00:03:40.460 ************************************ 00:03:40.460 14:05:09 -- common/autotest_common.sh@1142 -- # return 0 00:03:40.460 14:05:09 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:41.839 Hugepages 00:03:41.839 node hugesize free / total 00:03:41.839 node0 1048576kB 0 / 0 00:03:41.839 node0 2048kB 2048 / 2048 00:03:41.839 node1 1048576kB 0 / 0 00:03:41.839 node1 2048kB 0 / 0 00:03:41.839 00:03:41.839 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:41.839 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:41.839 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:41.839 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:41.839 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:41.839 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:41.839 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:41.839 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:41.839 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:41.839 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:41.839 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:41.839 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:41.839 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:41.839 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:41.839 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:41.839 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:41.839 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:41.839 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:41.839 14:05:11 -- spdk/autotest.sh@130 -- # uname -s 00:03:41.839 14:05:11 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:41.839 14:05:11 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:41.839 14:05:11 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:43.216 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:43.216 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:43.216 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:43.216 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:43.216 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:43.216 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:43.216 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:43.216 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:43.216 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:43.216 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:43.216 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:43.216 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:43.216 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:43.216 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:43.216 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:43.216 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:44.155 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.155 14:05:13 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:45.092 14:05:14 -- common/autotest_common.sh@1533 -- # bdfs=() 00:03:45.092 14:05:14 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:45.092 14:05:14 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:45.092 14:05:14 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:45.092 14:05:14 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:45.092 14:05:14 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:45.092 14:05:14 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:45.092 14:05:14 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:45.092 14:05:14 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:45.092 14:05:14 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:45.092 14:05:14 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:45.092 14:05:14 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.471 Waiting for block devices as requested 00:03:46.471 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:46.471 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:46.471 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:46.729 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:46.729 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:46.729 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:46.988 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:46.988 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:46.988 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:03:46.988 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:47.248 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:47.248 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:47.248 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:47.508 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:47.508 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:47.508 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:47.508 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:47.767 14:05:17 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:47.767 14:05:17 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:47.767 14:05:17 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:47.767 14:05:17 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:47.767 14:05:17 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:47.767 14:05:17 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:47.767 14:05:17 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:47.767 14:05:17 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:47.767 14:05:17 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:47.767 14:05:17 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:47.767 14:05:17 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:47.767 14:05:17 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:47.767 14:05:17 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:47.767 14:05:17 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:47.767 14:05:17 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:47.767 14:05:17 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:47.767 14:05:17 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:47.767 14:05:17 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:47.767 14:05:17 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:47.767 14:05:17 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:47.767 14:05:17 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:47.767 14:05:17 -- common/autotest_common.sh@1557 -- # continue 00:03:47.767 14:05:17 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:47.767 14:05:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:47.767 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:03:47.767 14:05:17 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:47.767 14:05:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:47.767 14:05:17 -- common/autotest_common.sh@10 -- # set +x 00:03:47.767 14:05:17 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:49.150 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:49.150 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:49.150 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:49.150 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:49.150 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:49.150 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:49.150 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:49.150 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:49.150 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:49.150 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
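The oacs/unvmcap greps in the pre-cleanup trace above decide whether a namespace revert is needed: OACS bit 3 signals namespace-management support, and unvmcap reports unallocated NVM capacity. A hedged, stand-alone re-creation of that check (device node and values taken from this run; the real logic lives in common/autotest_common.sh and differs in detail):

    #!/usr/bin/env bash
    # Skip the namespace revert when namespace management is supported
    # (OACS bit 3 set) but there is no unallocated capacity to reclaim.
    ctrlr=/dev/nvme0                                   # controller node from the trace
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    if (( oacs & 0x8 )) && (( unvmcap == 0 )); then
      echo "$ctrlr: namespace management supported, nothing to revert"
    fi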
00:03:49.150 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:49.150 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:49.150 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:49.150 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:49.150 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:49.150 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:50.088 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:50.346 14:05:19 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:50.346 14:05:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:50.346 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.346 14:05:19 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:50.346 14:05:19 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:50.346 14:05:19 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:50.346 14:05:19 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:50.346 14:05:19 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:50.346 14:05:19 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:50.346 14:05:19 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:50.346 14:05:19 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:50.346 14:05:19 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:50.346 14:05:19 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:50.346 14:05:19 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:50.346 14:05:19 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:50.346 14:05:19 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:50.346 14:05:19 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:50.346 14:05:19 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:50.346 14:05:19 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:50.346 14:05:19 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:50.346 14:05:19 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:50.346 14:05:19 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:50.346 14:05:19 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:50.346 14:05:19 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=789655 00:03:50.346 14:05:19 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:50.346 14:05:19 -- common/autotest_common.sh@1598 -- # waitforlisten 789655 00:03:50.346 14:05:19 -- common/autotest_common.sh@829 -- # '[' -z 789655 ']' 00:03:50.346 14:05:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:50.346 14:05:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:50.346 14:05:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:50.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:50.346 14:05:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:50.346 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:03:50.346 [2024-07-25 14:05:19.879286] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
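Immediately above, get_nvme_bdfs_by_id walks the BDFs reported by gen_nvme.sh and keeps the ones whose PCI device ID is 0x0a54 before opal_revert_cleanup starts spdk_tgt. An illustrative, simplified version of that filter (paths copied from the trace; not the exact helper):

    #!/usr/bin/env bash
    # List NVMe BDFs whose PCI device ID matches 0x0a54.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    want=0x0a54
    "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | while read -r bdf; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$want" ]] && echo "$bdf"
    done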
00:03:50.346 [2024-07-25 14:05:19.879368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid789655 ] 00:03:50.346 EAL: No free 2048 kB hugepages reported on node 1 00:03:50.346 [2024-07-25 14:05:19.935229] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:50.606 [2024-07-25 14:05:20.045521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:50.866 14:05:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:50.866 14:05:20 -- common/autotest_common.sh@862 -- # return 0 00:03:50.866 14:05:20 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:50.866 14:05:20 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:50.866 14:05:20 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:54.155 nvme0n1 00:03:54.155 14:05:23 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:54.155 [2024-07-25 14:05:23.593237] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:54.155 [2024-07-25 14:05:23.593287] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:54.155 request: 00:03:54.155 { 00:03:54.155 "nvme_ctrlr_name": "nvme0", 00:03:54.155 "password": "test", 00:03:54.155 "method": "bdev_nvme_opal_revert", 00:03:54.155 "req_id": 1 00:03:54.155 } 00:03:54.155 Got JSON-RPC error response 00:03:54.155 response: 00:03:54.155 { 00:03:54.155 "code": -32603, 00:03:54.155 "message": "Internal error" 00:03:54.155 } 00:03:54.155 14:05:23 -- common/autotest_common.sh@1604 -- # true 00:03:54.155 14:05:23 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:54.155 14:05:23 -- common/autotest_common.sh@1608 -- # killprocess 789655 00:03:54.155 14:05:23 -- common/autotest_common.sh@948 -- # '[' -z 789655 ']' 00:03:54.155 14:05:23 -- common/autotest_common.sh@952 -- # kill -0 789655 00:03:54.155 14:05:23 -- common/autotest_common.sh@953 -- # uname 00:03:54.155 14:05:23 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:54.155 14:05:23 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 789655 00:03:54.155 14:05:23 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:54.155 14:05:23 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:54.155 14:05:23 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 789655' 00:03:54.155 killing process with pid 789655 00:03:54.155 14:05:23 -- common/autotest_common.sh@967 -- # kill 789655 00:03:54.155 14:05:23 -- common/autotest_common.sh@972 -- # wait 789655 00:03:56.060 14:05:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:56.060 14:05:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:56.060 14:05:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:56.060 14:05:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:56.060 14:05:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:56.060 14:05:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.060 14:05:25 -- common/autotest_common.sh@10 -- # set +x 00:03:56.060 14:05:25 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:56.060 14:05:25 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.060 14:05:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.060 14:05:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.060 14:05:25 -- common/autotest_common.sh@10 -- # set +x 00:03:56.060 ************************************ 00:03:56.060 START TEST env 00:03:56.060 ************************************ 00:03:56.060 14:05:25 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:56.060 * Looking for test storage... 00:03:56.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:56.060 14:05:25 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.060 14:05:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.060 14:05:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.060 14:05:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.060 ************************************ 00:03:56.060 START TEST env_memory 00:03:56.060 ************************************ 00:03:56.060 14:05:25 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:56.060 00:03:56.060 00:03:56.060 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.060 http://cunit.sourceforge.net/ 00:03:56.060 00:03:56.060 00:03:56.060 Suite: memory 00:03:56.060 Test: alloc and free memory map ...[2024-07-25 14:05:25.564814] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:56.060 passed 00:03:56.060 Test: mem map translation ...[2024-07-25 14:05:25.585644] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:56.060 [2024-07-25 14:05:25.585665] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:56.060 [2024-07-25 14:05:25.585721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:56.060 [2024-07-25 14:05:25.585732] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:56.060 passed 00:03:56.060 Test: mem map registration ...[2024-07-25 14:05:25.627399] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:56.060 [2024-07-25 14:05:25.627427] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:56.060 passed 00:03:56.060 Test: mem map adjacent registrations ...passed 00:03:56.060 00:03:56.060 Run Summary: Type Total Ran Passed Failed Inactive 00:03:56.060 suites 1 1 n/a 0 0 00:03:56.060 tests 4 4 4 0 0 00:03:56.060 asserts 152 152 152 0 n/a 00:03:56.060 00:03:56.060 Elapsed time = 0.145 seconds 00:03:56.060 00:03:56.060 real 0m0.154s 00:03:56.060 user 0m0.144s 00:03:56.060 sys 0m0.010s 00:03:56.060 14:05:25 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.060 14:05:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:56.060 ************************************ 00:03:56.060 END TEST env_memory 00:03:56.060 ************************************ 00:03:56.060 14:05:25 env -- common/autotest_common.sh@1142 -- # return 0 00:03:56.060 14:05:25 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.060 14:05:25 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.060 14:05:25 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.060 14:05:25 env -- common/autotest_common.sh@10 -- # set +x 00:03:56.320 ************************************ 00:03:56.320 START TEST env_vtophys 00:03:56.320 ************************************ 00:03:56.320 14:05:25 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:56.320 EAL: lib.eal log level changed from notice to debug 00:03:56.320 EAL: Detected lcore 0 as core 0 on socket 0 00:03:56.320 EAL: Detected lcore 1 as core 1 on socket 0 00:03:56.320 EAL: Detected lcore 2 as core 2 on socket 0 00:03:56.320 EAL: Detected lcore 3 as core 3 on socket 0 00:03:56.320 EAL: Detected lcore 4 as core 4 on socket 0 00:03:56.320 EAL: Detected lcore 5 as core 5 on socket 0 00:03:56.320 EAL: Detected lcore 6 as core 8 on socket 0 00:03:56.320 EAL: Detected lcore 7 as core 9 on socket 0 00:03:56.320 EAL: Detected lcore 8 as core 10 on socket 0 00:03:56.320 EAL: Detected lcore 9 as core 11 on socket 0 00:03:56.320 EAL: Detected lcore 10 as core 12 on socket 0 00:03:56.320 EAL: Detected lcore 11 as core 13 on socket 0 00:03:56.320 EAL: Detected lcore 12 as core 0 on socket 1 00:03:56.320 EAL: Detected lcore 13 as core 1 on socket 1 00:03:56.320 EAL: Detected lcore 14 as core 2 on socket 1 00:03:56.320 EAL: Detected lcore 15 as core 3 on socket 1 00:03:56.320 EAL: Detected lcore 16 as core 4 on socket 1 00:03:56.320 EAL: Detected lcore 17 as core 5 on socket 1 00:03:56.320 EAL: Detected lcore 18 as core 8 on socket 1 00:03:56.320 EAL: Detected lcore 19 as core 9 on socket 1 00:03:56.320 EAL: Detected lcore 20 as core 10 on socket 1 00:03:56.320 EAL: Detected lcore 21 as core 11 on socket 1 00:03:56.320 EAL: Detected lcore 22 as core 12 on socket 1 00:03:56.320 EAL: Detected lcore 23 as core 13 on socket 1 00:03:56.320 EAL: Detected lcore 24 as core 0 on socket 0 00:03:56.320 EAL: Detected lcore 25 as core 1 on socket 0 00:03:56.320 EAL: Detected lcore 26 as core 2 on socket 0 00:03:56.320 EAL: Detected lcore 27 as core 3 on socket 0 00:03:56.320 EAL: Detected lcore 28 as core 4 on socket 0 00:03:56.320 EAL: Detected lcore 29 as core 5 on socket 0 00:03:56.320 EAL: Detected lcore 30 as core 8 on socket 0 00:03:56.320 EAL: Detected lcore 31 as core 9 on socket 0 00:03:56.320 EAL: Detected lcore 32 as core 10 on socket 0 00:03:56.320 EAL: Detected lcore 33 as core 11 on socket 0 00:03:56.320 EAL: Detected lcore 34 as core 12 on socket 0 00:03:56.320 EAL: Detected lcore 35 as core 13 on socket 0 00:03:56.320 EAL: Detected lcore 36 as core 0 on socket 1 00:03:56.320 EAL: Detected lcore 37 as core 1 on socket 1 00:03:56.320 EAL: Detected lcore 38 as core 2 on socket 1 00:03:56.320 EAL: Detected lcore 39 as core 3 on socket 1 00:03:56.320 EAL: Detected lcore 40 as core 4 on socket 1 00:03:56.320 EAL: Detected lcore 41 as core 5 on socket 1 00:03:56.320 EAL: Detected 
lcore 42 as core 8 on socket 1 00:03:56.320 EAL: Detected lcore 43 as core 9 on socket 1 00:03:56.320 EAL: Detected lcore 44 as core 10 on socket 1 00:03:56.320 EAL: Detected lcore 45 as core 11 on socket 1 00:03:56.320 EAL: Detected lcore 46 as core 12 on socket 1 00:03:56.320 EAL: Detected lcore 47 as core 13 on socket 1 00:03:56.320 EAL: Maximum logical cores by configuration: 128 00:03:56.320 EAL: Detected CPU lcores: 48 00:03:56.320 EAL: Detected NUMA nodes: 2 00:03:56.320 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:56.320 EAL: Detected shared linkage of DPDK 00:03:56.321 EAL: No shared files mode enabled, IPC will be disabled 00:03:56.321 EAL: Bus pci wants IOVA as 'DC' 00:03:56.321 EAL: Buses did not request a specific IOVA mode. 00:03:56.321 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:56.321 EAL: Selected IOVA mode 'VA' 00:03:56.321 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.321 EAL: Probing VFIO support... 00:03:56.321 EAL: IOMMU type 1 (Type 1) is supported 00:03:56.321 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:56.321 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:56.321 EAL: VFIO support initialized 00:03:56.321 EAL: Ask a virtual area of 0x2e000 bytes 00:03:56.321 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:56.321 EAL: Setting up physically contiguous memory... 00:03:56.321 EAL: Setting maximum number of open files to 524288 00:03:56.321 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:56.321 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:56.321 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:56.321 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.321 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:56.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.321 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.321 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:56.321 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:56.321 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.321 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:56.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.321 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.321 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:56.321 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:56.321 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.321 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:56.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.321 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.321 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:56.321 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:56.321 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.321 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:56.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:56.321 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.321 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:56.321 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:56.321 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:56.321 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.321 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:56.321 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:03:56.321 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.321 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:56.321 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:56.321 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.321 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:56.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.321 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.321 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:56.321 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:56.321 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.321 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:56.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.321 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.321 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:56.321 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:56.321 EAL: Ask a virtual area of 0x61000 bytes 00:03:56.321 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:56.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:56.321 EAL: Ask a virtual area of 0x400000000 bytes 00:03:56.321 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:56.321 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:56.321 EAL: Hugepages will be freed exactly as allocated. 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: TSC frequency is ~2700000 KHz 00:03:56.321 EAL: Main lcore 0 is ready (tid=7f2a42838a00;cpuset=[0]) 00:03:56.321 EAL: Trying to obtain current memory policy. 00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.321 EAL: Restoring previous memory policy: 0 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was expanded by 2MB 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:56.321 EAL: Mem event callback 'spdk:(nil)' registered 00:03:56.321 00:03:56.321 00:03:56.321 CUnit - A unit testing framework for C - Version 2.1-3 00:03:56.321 http://cunit.sourceforge.net/ 00:03:56.321 00:03:56.321 00:03:56.321 Suite: components_suite 00:03:56.321 Test: vtophys_malloc_test ...passed 00:03:56.321 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.321 EAL: Restoring previous memory policy: 4 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was expanded by 4MB 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was shrunk by 4MB 00:03:56.321 EAL: Trying to obtain current memory policy. 
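The EAL banner in this vtophys run assumes 2 MB hugepages were reserved beforehand (node0 shows 2048/2048 in the earlier setup.sh status table). Two read-only commands to inspect that state, shown only for orientation and not part of the test:

    # Per-node 2 MB hugepage counts and the kernel-wide summary.
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -i '^HugePages_' /proc/meminfo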
00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.321 EAL: Restoring previous memory policy: 4 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was expanded by 6MB 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was shrunk by 6MB 00:03:56.321 EAL: Trying to obtain current memory policy. 00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.321 EAL: Restoring previous memory policy: 4 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was expanded by 10MB 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was shrunk by 10MB 00:03:56.321 EAL: Trying to obtain current memory policy. 00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.321 EAL: Restoring previous memory policy: 4 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was expanded by 18MB 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was shrunk by 18MB 00:03:56.321 EAL: Trying to obtain current memory policy. 00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.321 EAL: Restoring previous memory policy: 4 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was expanded by 34MB 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was shrunk by 34MB 00:03:56.321 EAL: Trying to obtain current memory policy. 00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.321 EAL: Restoring previous memory policy: 4 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was expanded by 66MB 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was shrunk by 66MB 00:03:56.321 EAL: Trying to obtain current memory policy. 
00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.321 EAL: Restoring previous memory policy: 4 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was expanded by 130MB 00:03:56.321 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.321 EAL: request: mp_malloc_sync 00:03:56.321 EAL: No shared files mode enabled, IPC is disabled 00:03:56.321 EAL: Heap on socket 0 was shrunk by 130MB 00:03:56.321 EAL: Trying to obtain current memory policy. 00:03:56.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.582 EAL: Restoring previous memory policy: 4 00:03:56.582 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.582 EAL: request: mp_malloc_sync 00:03:56.582 EAL: No shared files mode enabled, IPC is disabled 00:03:56.582 EAL: Heap on socket 0 was expanded by 258MB 00:03:56.582 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.582 EAL: request: mp_malloc_sync 00:03:56.582 EAL: No shared files mode enabled, IPC is disabled 00:03:56.582 EAL: Heap on socket 0 was shrunk by 258MB 00:03:56.582 EAL: Trying to obtain current memory policy. 00:03:56.582 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:56.843 EAL: Restoring previous memory policy: 4 00:03:56.843 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.843 EAL: request: mp_malloc_sync 00:03:56.843 EAL: No shared files mode enabled, IPC is disabled 00:03:56.843 EAL: Heap on socket 0 was expanded by 514MB 00:03:56.843 EAL: Calling mem event callback 'spdk:(nil)' 00:03:56.843 EAL: request: mp_malloc_sync 00:03:56.843 EAL: No shared files mode enabled, IPC is disabled 00:03:56.843 EAL: Heap on socket 0 was shrunk by 514MB 00:03:56.843 EAL: Trying to obtain current memory policy. 
00:03:56.843 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:57.101 EAL: Restoring previous memory policy: 4 00:03:57.101 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.101 EAL: request: mp_malloc_sync 00:03:57.101 EAL: No shared files mode enabled, IPC is disabled 00:03:57.101 EAL: Heap on socket 0 was expanded by 1026MB 00:03:57.360 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.620 EAL: request: mp_malloc_sync 00:03:57.620 EAL: No shared files mode enabled, IPC is disabled 00:03:57.620 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:57.620 passed 00:03:57.620 00:03:57.620 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.620 suites 1 1 n/a 0 0 00:03:57.620 tests 2 2 2 0 0 00:03:57.620 asserts 497 497 497 0 n/a 00:03:57.620 00:03:57.620 Elapsed time = 1.315 seconds 00:03:57.620 EAL: Calling mem event callback 'spdk:(nil)' 00:03:57.620 EAL: request: mp_malloc_sync 00:03:57.620 EAL: No shared files mode enabled, IPC is disabled 00:03:57.620 EAL: Heap on socket 0 was shrunk by 2MB 00:03:57.620 EAL: No shared files mode enabled, IPC is disabled 00:03:57.620 EAL: No shared files mode enabled, IPC is disabled 00:03:57.620 EAL: No shared files mode enabled, IPC is disabled 00:03:57.621 00:03:57.621 real 0m1.427s 00:03:57.621 user 0m0.836s 00:03:57.621 sys 0m0.556s 00:03:57.621 14:05:27 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.621 14:05:27 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:57.621 ************************************ 00:03:57.621 END TEST env_vtophys 00:03:57.621 ************************************ 00:03:57.621 14:05:27 env -- common/autotest_common.sh@1142 -- # return 0 00:03:57.621 14:05:27 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.621 14:05:27 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.621 14:05:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.621 14:05:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.621 ************************************ 00:03:57.621 START TEST env_pci 00:03:57.621 ************************************ 00:03:57.621 14:05:27 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:57.621 00:03:57.621 00:03:57.621 CUnit - A unit testing framework for C - Version 2.1-3 00:03:57.621 http://cunit.sourceforge.net/ 00:03:57.621 00:03:57.621 00:03:57.621 Suite: pci 00:03:57.621 Test: pci_hook ...[2024-07-25 14:05:27.217373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 790545 has claimed it 00:03:57.621 EAL: Cannot find device (10000:00:01.0) 00:03:57.621 EAL: Failed to attach device on primary process 00:03:57.621 passed 00:03:57.621 00:03:57.621 Run Summary: Type Total Ran Passed Failed Inactive 00:03:57.621 suites 1 1 n/a 0 0 00:03:57.621 tests 1 1 1 0 0 00:03:57.621 asserts 25 25 25 0 n/a 00:03:57.621 00:03:57.621 Elapsed time = 0.021 seconds 00:03:57.621 00:03:57.621 real 0m0.034s 00:03:57.621 user 0m0.009s 00:03:57.621 sys 0m0.025s 00:03:57.621 14:05:27 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.621 14:05:27 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:57.621 ************************************ 00:03:57.621 END TEST env_pci 00:03:57.621 ************************************ 
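The pci_hook failure logged above is the expected outcome: the test claims the bogus BDF 10000:00:01.0 first, so the subsequent attach is rejected and the error path is exercised. The per-device lock files it creates live under /var/tmp; an illustrative way to check for stale ones after an aborted run:

    # List leftover SPDK per-device PCI lock files, if any.
    ls -l /var/tmp/spdk_pci_lock_* 2>/dev/null || echo "no SPDK PCI lock files present"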
00:03:57.621 14:05:27 env -- common/autotest_common.sh@1142 -- # return 0 00:03:57.621 14:05:27 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:57.621 14:05:27 env -- env/env.sh@15 -- # uname 00:03:57.621 14:05:27 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:57.621 14:05:27 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:57.621 14:05:27 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.621 14:05:27 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:57.621 14:05:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.621 14:05:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:57.881 ************************************ 00:03:57.881 START TEST env_dpdk_post_init 00:03:57.881 ************************************ 00:03:57.881 14:05:27 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:57.881 EAL: Detected CPU lcores: 48 00:03:57.881 EAL: Detected NUMA nodes: 2 00:03:57.881 EAL: Detected shared linkage of DPDK 00:03:57.881 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:57.881 EAL: Selected IOVA mode 'VA' 00:03:57.881 EAL: No free 2048 kB hugepages reported on node 1 00:03:57.881 EAL: VFIO support initialized 00:03:57.881 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:57.881 EAL: Using IOMMU type 1 (Type 1) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:57.881 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:58.140 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:58.140 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:58.140 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:58.140 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:58.709 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:02.004 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:02.004 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:02.262 Starting DPDK initialization... 00:04:02.262 Starting SPDK post initialization... 00:04:02.262 SPDK NVMe probe 00:04:02.262 Attaching to 0000:88:00.0 00:04:02.262 Attached to 0000:88:00.0 00:04:02.262 Cleaning up... 
00:04:02.262 00:04:02.262 real 0m4.394s 00:04:02.262 user 0m3.278s 00:04:02.262 sys 0m0.178s 00:04:02.262 14:05:31 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.262 14:05:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:02.262 ************************************ 00:04:02.262 END TEST env_dpdk_post_init 00:04:02.262 ************************************ 00:04:02.262 14:05:31 env -- common/autotest_common.sh@1142 -- # return 0 00:04:02.262 14:05:31 env -- env/env.sh@26 -- # uname 00:04:02.262 14:05:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:02.262 14:05:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.262 14:05:31 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.262 14:05:31 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.262 14:05:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.262 ************************************ 00:04:02.262 START TEST env_mem_callbacks 00:04:02.262 ************************************ 00:04:02.262 14:05:31 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:02.262 EAL: Detected CPU lcores: 48 00:04:02.262 EAL: Detected NUMA nodes: 2 00:04:02.262 EAL: Detected shared linkage of DPDK 00:04:02.262 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:02.262 EAL: Selected IOVA mode 'VA' 00:04:02.262 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.262 EAL: VFIO support initialized 00:04:02.262 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.262 00:04:02.262 00:04:02.262 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.262 http://cunit.sourceforge.net/ 00:04:02.262 00:04:02.262 00:04:02.262 Suite: memory 00:04:02.262 Test: test ... 
00:04:02.262 register 0x200000200000 2097152 00:04:02.262 malloc 3145728 00:04:02.262 register 0x200000400000 4194304 00:04:02.262 buf 0x200000500000 len 3145728 PASSED 00:04:02.262 malloc 64 00:04:02.262 buf 0x2000004fff40 len 64 PASSED 00:04:02.262 malloc 4194304 00:04:02.262 register 0x200000800000 6291456 00:04:02.262 buf 0x200000a00000 len 4194304 PASSED 00:04:02.262 free 0x200000500000 3145728 00:04:02.262 free 0x2000004fff40 64 00:04:02.262 unregister 0x200000400000 4194304 PASSED 00:04:02.262 free 0x200000a00000 4194304 00:04:02.262 unregister 0x200000800000 6291456 PASSED 00:04:02.262 malloc 8388608 00:04:02.262 register 0x200000400000 10485760 00:04:02.262 buf 0x200000600000 len 8388608 PASSED 00:04:02.262 free 0x200000600000 8388608 00:04:02.262 unregister 0x200000400000 10485760 PASSED 00:04:02.262 passed 00:04:02.262 00:04:02.262 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.262 suites 1 1 n/a 0 0 00:04:02.262 tests 1 1 1 0 0 00:04:02.262 asserts 15 15 15 0 n/a 00:04:02.262 00:04:02.262 Elapsed time = 0.005 seconds 00:04:02.262 00:04:02.262 real 0m0.047s 00:04:02.262 user 0m0.012s 00:04:02.262 sys 0m0.035s 00:04:02.263 14:05:31 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.263 14:05:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:02.263 ************************************ 00:04:02.263 END TEST env_mem_callbacks 00:04:02.263 ************************************ 00:04:02.263 14:05:31 env -- common/autotest_common.sh@1142 -- # return 0 00:04:02.263 00:04:02.263 real 0m6.357s 00:04:02.263 user 0m4.413s 00:04:02.263 sys 0m0.989s 00:04:02.263 14:05:31 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:02.263 14:05:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.263 ************************************ 00:04:02.263 END TEST env 00:04:02.263 ************************************ 00:04:02.263 14:05:31 -- common/autotest_common.sh@1142 -- # return 0 00:04:02.263 14:05:31 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:02.263 14:05:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.263 14:05:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.263 14:05:31 -- common/autotest_common.sh@10 -- # set +x 00:04:02.263 ************************************ 00:04:02.263 START TEST rpc 00:04:02.263 ************************************ 00:04:02.263 14:05:31 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:02.263 * Looking for test storage... 00:04:02.263 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.263 14:05:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=791260 00:04:02.263 14:05:31 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:02.263 14:05:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:02.263 14:05:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 791260 00:04:02.263 14:05:31 rpc -- common/autotest_common.sh@829 -- # '[' -z 791260 ']' 00:04:02.263 14:05:31 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.263 14:05:31 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:02.263 14:05:31 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
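waitforlisten in the rpc test setup above polls the target until its UNIX-domain RPC socket answers. A simplified sketch of that wait loop (the real helper also validates the PID and handles more failure modes):

    #!/usr/bin/env bash
    # Poll the RPC socket with rpc.py until spdk_tgt responds, up to 100 tries.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    rpc_sock=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
      if "$rpc_py" -s "$rpc_sock" -t 1 rpc_get_methods &>/dev/null; then
        echo "spdk_tgt is listening on $rpc_sock"
        break
      fi
      sleep 0.5
    done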
00:04:02.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.263 14:05:31 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:02.263 14:05:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.523 [2024-07-25 14:05:31.954814] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:04:02.523 [2024-07-25 14:05:31.954914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791260 ] 00:04:02.523 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.523 [2024-07-25 14:05:32.010943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.523 [2024-07-25 14:05:32.116285] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:02.523 [2024-07-25 14:05:32.116344] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 791260' to capture a snapshot of events at runtime. 00:04:02.523 [2024-07-25 14:05:32.116357] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:02.523 [2024-07-25 14:05:32.116368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:02.523 [2024-07-25 14:05:32.116378] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid791260 for offline analysis/debug. 00:04:02.523 [2024-07-25 14:05:32.116412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.782 14:05:32 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:02.782 14:05:32 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:02.782 14:05:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.782 14:05:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:02.782 14:05:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:02.782 14:05:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:02.782 14:05:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:02.782 14:05:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:02.782 14:05:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:02.782 ************************************ 00:04:02.782 START TEST rpc_integrity 00:04:02.782 ************************************ 00:04:02.782 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:02.782 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:02.782 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.782 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.782 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.782 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:02.782 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:02.782 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:02.782 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:02.782 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:02.782 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:02.782 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:02.782 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:02.782 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.042 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.042 { 00:04:03.042 "name": "Malloc0", 00:04:03.042 "aliases": [ 00:04:03.042 "46404572-2eaf-46ff-9304-01a1725a7876" 00:04:03.042 ], 00:04:03.042 "product_name": "Malloc disk", 00:04:03.042 "block_size": 512, 00:04:03.042 "num_blocks": 16384, 00:04:03.042 "uuid": "46404572-2eaf-46ff-9304-01a1725a7876", 00:04:03.042 "assigned_rate_limits": { 00:04:03.042 "rw_ios_per_sec": 0, 00:04:03.042 "rw_mbytes_per_sec": 0, 00:04:03.042 "r_mbytes_per_sec": 0, 00:04:03.042 "w_mbytes_per_sec": 0 00:04:03.042 }, 00:04:03.042 "claimed": false, 00:04:03.042 "zoned": false, 00:04:03.042 "supported_io_types": { 00:04:03.042 "read": true, 00:04:03.042 "write": true, 00:04:03.042 "unmap": true, 00:04:03.042 "flush": true, 00:04:03.042 "reset": true, 00:04:03.042 "nvme_admin": false, 00:04:03.042 "nvme_io": false, 00:04:03.042 "nvme_io_md": false, 00:04:03.042 "write_zeroes": true, 00:04:03.042 "zcopy": true, 00:04:03.042 "get_zone_info": false, 00:04:03.042 "zone_management": false, 00:04:03.042 "zone_append": false, 00:04:03.042 "compare": false, 00:04:03.042 "compare_and_write": false, 00:04:03.042 "abort": true, 00:04:03.042 "seek_hole": false, 00:04:03.042 "seek_data": false, 00:04:03.042 "copy": true, 00:04:03.042 "nvme_iov_md": false 00:04:03.042 }, 00:04:03.042 "memory_domains": [ 00:04:03.042 { 00:04:03.042 "dma_device_id": "system", 00:04:03.042 "dma_device_type": 1 00:04:03.042 }, 00:04:03.042 { 00:04:03.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.042 "dma_device_type": 2 00:04:03.042 } 00:04:03.042 ], 00:04:03.042 "driver_specific": {} 00:04:03.042 } 00:04:03.042 ]' 00:04:03.042 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.042 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.042 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.042 [2024-07-25 14:05:32.480852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:03.042 [2024-07-25 14:05:32.480902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.042 [2024-07-25 14:05:32.480924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd46d50 00:04:03.042 [2024-07-25 14:05:32.480941] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.042 
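rpc_integrity exercises the malloc and passthru bdev RPCs shown above and below. The same sequence issued by hand, assuming scripts/rpc.py and the default socket, looks roughly like:

$ scripts/rpc.py bdev_malloc_create 8 512            # prints the new bdev name, e.g. Malloc0
$ scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
$ scripts/rpc.py bdev_get_bdevs | jq length          # the test expects 2 here
$ scripts/rpc.py bdev_passthru_delete Passthru0
$ scripts/rpc.py bdev_malloc_delete Malloc0
$ scripts/rpc.py bdev_get_bdevs | jq length          # and 0 again after cleanup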
[2024-07-25 14:05:32.482276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.042 [2024-07-25 14:05:32.482300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.042 Passthru0 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.042 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.042 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.042 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.042 { 00:04:03.042 "name": "Malloc0", 00:04:03.042 "aliases": [ 00:04:03.042 "46404572-2eaf-46ff-9304-01a1725a7876" 00:04:03.042 ], 00:04:03.042 "product_name": "Malloc disk", 00:04:03.042 "block_size": 512, 00:04:03.042 "num_blocks": 16384, 00:04:03.042 "uuid": "46404572-2eaf-46ff-9304-01a1725a7876", 00:04:03.042 "assigned_rate_limits": { 00:04:03.042 "rw_ios_per_sec": 0, 00:04:03.042 "rw_mbytes_per_sec": 0, 00:04:03.042 "r_mbytes_per_sec": 0, 00:04:03.042 "w_mbytes_per_sec": 0 00:04:03.042 }, 00:04:03.042 "claimed": true, 00:04:03.042 "claim_type": "exclusive_write", 00:04:03.042 "zoned": false, 00:04:03.042 "supported_io_types": { 00:04:03.042 "read": true, 00:04:03.042 "write": true, 00:04:03.042 "unmap": true, 00:04:03.042 "flush": true, 00:04:03.042 "reset": true, 00:04:03.042 "nvme_admin": false, 00:04:03.042 "nvme_io": false, 00:04:03.042 "nvme_io_md": false, 00:04:03.042 "write_zeroes": true, 00:04:03.042 "zcopy": true, 00:04:03.042 "get_zone_info": false, 00:04:03.042 "zone_management": false, 00:04:03.042 "zone_append": false, 00:04:03.042 "compare": false, 00:04:03.042 "compare_and_write": false, 00:04:03.042 "abort": true, 00:04:03.042 "seek_hole": false, 00:04:03.042 "seek_data": false, 00:04:03.042 "copy": true, 00:04:03.042 "nvme_iov_md": false 00:04:03.042 }, 00:04:03.042 "memory_domains": [ 00:04:03.042 { 00:04:03.042 "dma_device_id": "system", 00:04:03.042 "dma_device_type": 1 00:04:03.042 }, 00:04:03.042 { 00:04:03.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.043 "dma_device_type": 2 00:04:03.043 } 00:04:03.043 ], 00:04:03.043 "driver_specific": {} 00:04:03.043 }, 00:04:03.043 { 00:04:03.043 "name": "Passthru0", 00:04:03.043 "aliases": [ 00:04:03.043 "77572b60-947f-5aae-b69f-5545a32caef2" 00:04:03.043 ], 00:04:03.043 "product_name": "passthru", 00:04:03.043 "block_size": 512, 00:04:03.043 "num_blocks": 16384, 00:04:03.043 "uuid": "77572b60-947f-5aae-b69f-5545a32caef2", 00:04:03.043 "assigned_rate_limits": { 00:04:03.043 "rw_ios_per_sec": 0, 00:04:03.043 "rw_mbytes_per_sec": 0, 00:04:03.043 "r_mbytes_per_sec": 0, 00:04:03.043 "w_mbytes_per_sec": 0 00:04:03.043 }, 00:04:03.043 "claimed": false, 00:04:03.043 "zoned": false, 00:04:03.043 "supported_io_types": { 00:04:03.043 "read": true, 00:04:03.043 "write": true, 00:04:03.043 "unmap": true, 00:04:03.043 "flush": true, 00:04:03.043 "reset": true, 00:04:03.043 "nvme_admin": false, 00:04:03.043 "nvme_io": false, 00:04:03.043 "nvme_io_md": false, 00:04:03.043 "write_zeroes": true, 00:04:03.043 "zcopy": true, 00:04:03.043 "get_zone_info": false, 00:04:03.043 "zone_management": false, 00:04:03.043 "zone_append": false, 00:04:03.043 "compare": false, 00:04:03.043 "compare_and_write": false, 00:04:03.043 "abort": true, 00:04:03.043 "seek_hole": false, 
00:04:03.043 "seek_data": false, 00:04:03.043 "copy": true, 00:04:03.043 "nvme_iov_md": false 00:04:03.043 }, 00:04:03.043 "memory_domains": [ 00:04:03.043 { 00:04:03.043 "dma_device_id": "system", 00:04:03.043 "dma_device_type": 1 00:04:03.043 }, 00:04:03.043 { 00:04:03.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.043 "dma_device_type": 2 00:04:03.043 } 00:04:03.043 ], 00:04:03.043 "driver_specific": { 00:04:03.043 "passthru": { 00:04:03.043 "name": "Passthru0", 00:04:03.043 "base_bdev_name": "Malloc0" 00:04:03.043 } 00:04:03.043 } 00:04:03.043 } 00:04:03.043 ]' 00:04:03.043 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.043 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.043 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.043 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.043 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.043 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.043 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.043 14:05:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.043 00:04:03.043 real 0m0.211s 00:04:03.043 user 0m0.132s 00:04:03.043 sys 0m0.023s 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.043 14:05:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.043 ************************************ 00:04:03.043 END TEST rpc_integrity 00:04:03.043 ************************************ 00:04:03.043 14:05:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.043 14:05:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:03.043 14:05:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.043 14:05:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.043 14:05:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.043 ************************************ 00:04:03.043 START TEST rpc_plugins 00:04:03.043 ************************************ 00:04:03.043 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:03.043 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:03.043 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.043 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.043 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.043 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:03.043 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:03.043 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.043 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.043 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.043 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:03.043 { 00:04:03.043 "name": "Malloc1", 00:04:03.043 "aliases": [ 00:04:03.043 "36fb734e-b513-46c7-953f-e09e731ee4fc" 00:04:03.043 ], 00:04:03.043 "product_name": "Malloc disk", 00:04:03.043 "block_size": 4096, 00:04:03.043 "num_blocks": 256, 00:04:03.043 "uuid": "36fb734e-b513-46c7-953f-e09e731ee4fc", 00:04:03.043 "assigned_rate_limits": { 00:04:03.043 "rw_ios_per_sec": 0, 00:04:03.043 "rw_mbytes_per_sec": 0, 00:04:03.043 "r_mbytes_per_sec": 0, 00:04:03.043 "w_mbytes_per_sec": 0 00:04:03.043 }, 00:04:03.043 "claimed": false, 00:04:03.043 "zoned": false, 00:04:03.043 "supported_io_types": { 00:04:03.043 "read": true, 00:04:03.043 "write": true, 00:04:03.043 "unmap": true, 00:04:03.043 "flush": true, 00:04:03.043 "reset": true, 00:04:03.043 "nvme_admin": false, 00:04:03.043 "nvme_io": false, 00:04:03.043 "nvme_io_md": false, 00:04:03.043 "write_zeroes": true, 00:04:03.043 "zcopy": true, 00:04:03.043 "get_zone_info": false, 00:04:03.043 "zone_management": false, 00:04:03.043 "zone_append": false, 00:04:03.043 "compare": false, 00:04:03.043 "compare_and_write": false, 00:04:03.043 "abort": true, 00:04:03.043 "seek_hole": false, 00:04:03.043 "seek_data": false, 00:04:03.043 "copy": true, 00:04:03.043 "nvme_iov_md": false 00:04:03.043 }, 00:04:03.043 "memory_domains": [ 00:04:03.043 { 00:04:03.043 "dma_device_id": "system", 00:04:03.043 "dma_device_type": 1 00:04:03.043 }, 00:04:03.043 { 00:04:03.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.043 "dma_device_type": 2 00:04:03.043 } 00:04:03.043 ], 00:04:03.043 "driver_specific": {} 00:04:03.043 } 00:04:03.043 ]' 00:04:03.043 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:03.302 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:03.302 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:03.302 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.302 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.302 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.302 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:03.302 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.302 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.302 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.302 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:03.302 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:03.302 14:05:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:03.302 00:04:03.302 real 0m0.104s 00:04:03.302 user 0m0.066s 00:04:03.302 sys 0m0.011s 00:04:03.302 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.302 14:05:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:03.302 ************************************ 00:04:03.302 END TEST rpc_plugins 00:04:03.302 ************************************ 00:04:03.302 14:05:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.302 14:05:32 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:03.302 14:05:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.302 14:05:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.302 14:05:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.302 ************************************ 00:04:03.302 START TEST rpc_trace_cmd_test 00:04:03.302 ************************************ 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:03.302 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid791260", 00:04:03.302 "tpoint_group_mask": "0x8", 00:04:03.302 "iscsi_conn": { 00:04:03.302 "mask": "0x2", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "scsi": { 00:04:03.302 "mask": "0x4", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "bdev": { 00:04:03.302 "mask": "0x8", 00:04:03.302 "tpoint_mask": "0xffffffffffffffff" 00:04:03.302 }, 00:04:03.302 "nvmf_rdma": { 00:04:03.302 "mask": "0x10", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "nvmf_tcp": { 00:04:03.302 "mask": "0x20", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "ftl": { 00:04:03.302 "mask": "0x40", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "blobfs": { 00:04:03.302 "mask": "0x80", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "dsa": { 00:04:03.302 "mask": "0x200", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "thread": { 00:04:03.302 "mask": "0x400", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "nvme_pcie": { 00:04:03.302 "mask": "0x800", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "iaa": { 00:04:03.302 "mask": "0x1000", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "nvme_tcp": { 00:04:03.302 "mask": "0x2000", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "bdev_nvme": { 00:04:03.302 "mask": "0x4000", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 }, 00:04:03.302 "sock": { 00:04:03.302 "mask": "0x8000", 00:04:03.302 "tpoint_mask": "0x0" 00:04:03.302 } 00:04:03.302 }' 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:03.302 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:03.562 14:05:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
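rpc_trace_cmd_test only checks that trace_get_info reports the trace shm path and a non-zero bdev tpoint mask (group 0x8, enabled by -e bdev at startup). A rough manual equivalent, with 791260 being this run's spdk_tgt pid, is:

$ scripts/rpc.py trace_get_info | jq .
# capture a snapshot of the events recorded in /dev/shm/spdk_tgt_trace.pid791260
$ build/bin/spdk_trace -s spdk_tgt -p 791260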
00:04:03.562 00:04:03.562 real 0m0.179s 00:04:03.562 user 0m0.155s 00:04:03.562 sys 0m0.016s 00:04:03.562 14:05:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.562 14:05:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:03.562 ************************************ 00:04:03.562 END TEST rpc_trace_cmd_test 00:04:03.562 ************************************ 00:04:03.562 14:05:32 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.562 14:05:32 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:03.562 14:05:32 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:03.562 14:05:32 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:03.562 14:05:32 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:03.562 14:05:32 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:03.562 14:05:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:03.562 ************************************ 00:04:03.562 START TEST rpc_daemon_integrity 00:04:03.562 ************************************ 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:03.562 { 00:04:03.562 "name": "Malloc2", 00:04:03.562 "aliases": [ 00:04:03.562 "b58a3011-7532-4f7a-88cf-1455a3bf123c" 00:04:03.562 ], 00:04:03.562 "product_name": "Malloc disk", 00:04:03.562 "block_size": 512, 00:04:03.562 "num_blocks": 16384, 00:04:03.562 "uuid": "b58a3011-7532-4f7a-88cf-1455a3bf123c", 00:04:03.562 "assigned_rate_limits": { 00:04:03.562 "rw_ios_per_sec": 0, 00:04:03.562 "rw_mbytes_per_sec": 0, 00:04:03.562 "r_mbytes_per_sec": 0, 00:04:03.562 "w_mbytes_per_sec": 0 00:04:03.562 }, 00:04:03.562 "claimed": false, 00:04:03.562 "zoned": false, 00:04:03.562 "supported_io_types": { 00:04:03.562 "read": true, 00:04:03.562 "write": true, 00:04:03.562 "unmap": true, 00:04:03.562 "flush": true, 00:04:03.562 "reset": true, 00:04:03.562 "nvme_admin": false, 00:04:03.562 "nvme_io": false, 
00:04:03.562 "nvme_io_md": false, 00:04:03.562 "write_zeroes": true, 00:04:03.562 "zcopy": true, 00:04:03.562 "get_zone_info": false, 00:04:03.562 "zone_management": false, 00:04:03.562 "zone_append": false, 00:04:03.562 "compare": false, 00:04:03.562 "compare_and_write": false, 00:04:03.562 "abort": true, 00:04:03.562 "seek_hole": false, 00:04:03.562 "seek_data": false, 00:04:03.562 "copy": true, 00:04:03.562 "nvme_iov_md": false 00:04:03.562 }, 00:04:03.562 "memory_domains": [ 00:04:03.562 { 00:04:03.562 "dma_device_id": "system", 00:04:03.562 "dma_device_type": 1 00:04:03.562 }, 00:04:03.562 { 00:04:03.562 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.562 "dma_device_type": 2 00:04:03.562 } 00:04:03.562 ], 00:04:03.562 "driver_specific": {} 00:04:03.562 } 00:04:03.562 ]' 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.562 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.562 [2024-07-25 14:05:33.114662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:03.562 [2024-07-25 14:05:33.114697] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:03.562 [2024-07-25 14:05:33.114737] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd46980 00:04:03.562 [2024-07-25 14:05:33.114750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:03.563 [2024-07-25 14:05:33.115913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:03.563 [2024-07-25 14:05:33.115935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:03.563 Passthru0 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:03.563 { 00:04:03.563 "name": "Malloc2", 00:04:03.563 "aliases": [ 00:04:03.563 "b58a3011-7532-4f7a-88cf-1455a3bf123c" 00:04:03.563 ], 00:04:03.563 "product_name": "Malloc disk", 00:04:03.563 "block_size": 512, 00:04:03.563 "num_blocks": 16384, 00:04:03.563 "uuid": "b58a3011-7532-4f7a-88cf-1455a3bf123c", 00:04:03.563 "assigned_rate_limits": { 00:04:03.563 "rw_ios_per_sec": 0, 00:04:03.563 "rw_mbytes_per_sec": 0, 00:04:03.563 "r_mbytes_per_sec": 0, 00:04:03.563 "w_mbytes_per_sec": 0 00:04:03.563 }, 00:04:03.563 "claimed": true, 00:04:03.563 "claim_type": "exclusive_write", 00:04:03.563 "zoned": false, 00:04:03.563 "supported_io_types": { 00:04:03.563 "read": true, 00:04:03.563 "write": true, 00:04:03.563 "unmap": true, 00:04:03.563 "flush": true, 00:04:03.563 "reset": true, 00:04:03.563 "nvme_admin": false, 00:04:03.563 "nvme_io": false, 00:04:03.563 "nvme_io_md": false, 00:04:03.563 "write_zeroes": true, 00:04:03.563 "zcopy": true, 00:04:03.563 "get_zone_info": 
false, 00:04:03.563 "zone_management": false, 00:04:03.563 "zone_append": false, 00:04:03.563 "compare": false, 00:04:03.563 "compare_and_write": false, 00:04:03.563 "abort": true, 00:04:03.563 "seek_hole": false, 00:04:03.563 "seek_data": false, 00:04:03.563 "copy": true, 00:04:03.563 "nvme_iov_md": false 00:04:03.563 }, 00:04:03.563 "memory_domains": [ 00:04:03.563 { 00:04:03.563 "dma_device_id": "system", 00:04:03.563 "dma_device_type": 1 00:04:03.563 }, 00:04:03.563 { 00:04:03.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.563 "dma_device_type": 2 00:04:03.563 } 00:04:03.563 ], 00:04:03.563 "driver_specific": {} 00:04:03.563 }, 00:04:03.563 { 00:04:03.563 "name": "Passthru0", 00:04:03.563 "aliases": [ 00:04:03.563 "1a057cb8-2d3a-5986-bd16-1abac7263d6c" 00:04:03.563 ], 00:04:03.563 "product_name": "passthru", 00:04:03.563 "block_size": 512, 00:04:03.563 "num_blocks": 16384, 00:04:03.563 "uuid": "1a057cb8-2d3a-5986-bd16-1abac7263d6c", 00:04:03.563 "assigned_rate_limits": { 00:04:03.563 "rw_ios_per_sec": 0, 00:04:03.563 "rw_mbytes_per_sec": 0, 00:04:03.563 "r_mbytes_per_sec": 0, 00:04:03.563 "w_mbytes_per_sec": 0 00:04:03.563 }, 00:04:03.563 "claimed": false, 00:04:03.563 "zoned": false, 00:04:03.563 "supported_io_types": { 00:04:03.563 "read": true, 00:04:03.563 "write": true, 00:04:03.563 "unmap": true, 00:04:03.563 "flush": true, 00:04:03.563 "reset": true, 00:04:03.563 "nvme_admin": false, 00:04:03.563 "nvme_io": false, 00:04:03.563 "nvme_io_md": false, 00:04:03.563 "write_zeroes": true, 00:04:03.563 "zcopy": true, 00:04:03.563 "get_zone_info": false, 00:04:03.563 "zone_management": false, 00:04:03.563 "zone_append": false, 00:04:03.563 "compare": false, 00:04:03.563 "compare_and_write": false, 00:04:03.563 "abort": true, 00:04:03.563 "seek_hole": false, 00:04:03.563 "seek_data": false, 00:04:03.563 "copy": true, 00:04:03.563 "nvme_iov_md": false 00:04:03.563 }, 00:04:03.563 "memory_domains": [ 00:04:03.563 { 00:04:03.563 "dma_device_id": "system", 00:04:03.563 "dma_device_type": 1 00:04:03.563 }, 00:04:03.563 { 00:04:03.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:03.563 "dma_device_type": 2 00:04:03.563 } 00:04:03.563 ], 00:04:03.563 "driver_specific": { 00:04:03.563 "passthru": { 00:04:03.563 "name": "Passthru0", 00:04:03.563 "base_bdev_name": "Malloc2" 00:04:03.563 } 00:04:03.563 } 00:04:03.563 } 00:04:03.563 ]' 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:03.563 14:05:33 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:03.563 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:03.823 14:05:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:03.823 00:04:03.823 real 0m0.211s 00:04:03.823 user 0m0.138s 00:04:03.823 sys 0m0.017s 00:04:03.823 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:03.823 14:05:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:03.823 ************************************ 00:04:03.823 END TEST rpc_daemon_integrity 00:04:03.823 ************************************ 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:03.823 14:05:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:03.823 14:05:33 rpc -- rpc/rpc.sh@84 -- # killprocess 791260 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@948 -- # '[' -z 791260 ']' 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@952 -- # kill -0 791260 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@953 -- # uname 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 791260 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 791260' 00:04:03.823 killing process with pid 791260 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@967 -- # kill 791260 00:04:03.823 14:05:33 rpc -- common/autotest_common.sh@972 -- # wait 791260 00:04:04.081 00:04:04.081 real 0m1.848s 00:04:04.081 user 0m2.283s 00:04:04.081 sys 0m0.567s 00:04:04.081 14:05:33 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.081 14:05:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.081 ************************************ 00:04:04.081 END TEST rpc 00:04:04.081 ************************************ 00:04:04.081 14:05:33 -- common/autotest_common.sh@1142 -- # return 0 00:04:04.081 14:05:33 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:04.081 14:05:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.081 14:05:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.081 14:05:33 -- common/autotest_common.sh@10 -- # set +x 00:04:04.338 ************************************ 00:04:04.338 START TEST skip_rpc 00:04:04.338 ************************************ 00:04:04.339 14:05:33 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:04.339 * Looking for test storage... 
00:04:04.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:04.339 14:05:33 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:04.339 14:05:33 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:04.339 14:05:33 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:04.339 14:05:33 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.339 14:05:33 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.339 14:05:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.339 ************************************ 00:04:04.339 START TEST skip_rpc 00:04:04.339 ************************************ 00:04:04.339 14:05:33 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:04.339 14:05:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=791633 00:04:04.339 14:05:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:04.339 14:05:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.339 14:05:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:04.339 [2024-07-25 14:05:33.883628] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:04:04.339 [2024-07-25 14:05:33.883702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid791633 ] 00:04:04.339 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.339 [2024-07-25 14:05:33.938362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.597 [2024-07-25 14:05:34.040285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 791633 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 791633 ']' 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 791633 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 791633 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 791633' 00:04:09.874 killing process with pid 791633 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 791633 00:04:09.874 14:05:38 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 791633 00:04:09.874 00:04:09.874 real 0m5.460s 00:04:09.874 user 0m5.168s 00:04:09.874 sys 0m0.284s 00:04:09.875 14:05:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.875 14:05:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.875 ************************************ 00:04:09.875 END TEST skip_rpc 00:04:09.875 ************************************ 00:04:09.875 14:05:39 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:09.875 14:05:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:09.875 14:05:39 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.875 14:05:39 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.875 14:05:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.875 ************************************ 00:04:09.875 START TEST skip_rpc_with_json 00:04:09.875 ************************************ 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=792326 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 792326 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 792326 ']' 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
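skip_rpc, which finishes just above, starts the target with --no-rpc-server, so no /var/tmp/spdk.sock is created and any RPC call is expected to fail; that is what the NOT rpc_cmd spdk_get_version check asserts. By hand, the same negative check would be roughly:

$ build/bin/spdk_tgt --no-rpc-server -m 0x1 &
$ scripts/rpc.py spdk_get_version || echo 'RPC unavailable, as expected'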
00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:09.875 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:09.875 [2024-07-25 14:05:39.396622] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:04:09.875 [2024-07-25 14:05:39.396730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid792326 ] 00:04:09.875 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.875 [2024-07-25 14:05:39.453283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.155 [2024-07-25 14:05:39.562648] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.421 [2024-07-25 14:05:39.805141] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:10.421 request: 00:04:10.421 { 00:04:10.421 "trtype": "tcp", 00:04:10.421 "method": "nvmf_get_transports", 00:04:10.421 "req_id": 1 00:04:10.421 } 00:04:10.421 Got JSON-RPC error response 00:04:10.421 response: 00:04:10.421 { 00:04:10.421 "code": -19, 00:04:10.421 "message": "No such device" 00:04:10.421 } 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.421 [2024-07-25 14:05:39.813254] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.421 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.421 { 00:04:10.421 "subsystems": [ 00:04:10.421 { 00:04:10.421 "subsystem": "vfio_user_target", 00:04:10.421 "config": null 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "subsystem": "keyring", 00:04:10.421 "config": [] 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "subsystem": "iobuf", 00:04:10.421 "config": [ 00:04:10.421 { 00:04:10.421 "method": "iobuf_set_options", 00:04:10.421 "params": { 00:04:10.421 "small_pool_count": 8192, 00:04:10.421 "large_pool_count": 1024, 00:04:10.421 "small_bufsize": 8192, 00:04:10.421 "large_bufsize": 
135168 00:04:10.421 } 00:04:10.421 } 00:04:10.421 ] 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "subsystem": "sock", 00:04:10.421 "config": [ 00:04:10.421 { 00:04:10.421 "method": "sock_set_default_impl", 00:04:10.421 "params": { 00:04:10.421 "impl_name": "posix" 00:04:10.421 } 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "method": "sock_impl_set_options", 00:04:10.421 "params": { 00:04:10.421 "impl_name": "ssl", 00:04:10.421 "recv_buf_size": 4096, 00:04:10.421 "send_buf_size": 4096, 00:04:10.421 "enable_recv_pipe": true, 00:04:10.421 "enable_quickack": false, 00:04:10.421 "enable_placement_id": 0, 00:04:10.421 "enable_zerocopy_send_server": true, 00:04:10.421 "enable_zerocopy_send_client": false, 00:04:10.421 "zerocopy_threshold": 0, 00:04:10.421 "tls_version": 0, 00:04:10.421 "enable_ktls": false 00:04:10.421 } 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "method": "sock_impl_set_options", 00:04:10.421 "params": { 00:04:10.421 "impl_name": "posix", 00:04:10.421 "recv_buf_size": 2097152, 00:04:10.421 "send_buf_size": 2097152, 00:04:10.421 "enable_recv_pipe": true, 00:04:10.421 "enable_quickack": false, 00:04:10.421 "enable_placement_id": 0, 00:04:10.421 "enable_zerocopy_send_server": true, 00:04:10.421 "enable_zerocopy_send_client": false, 00:04:10.421 "zerocopy_threshold": 0, 00:04:10.421 "tls_version": 0, 00:04:10.421 "enable_ktls": false 00:04:10.421 } 00:04:10.421 } 00:04:10.421 ] 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "subsystem": "vmd", 00:04:10.421 "config": [] 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "subsystem": "accel", 00:04:10.421 "config": [ 00:04:10.421 { 00:04:10.421 "method": "accel_set_options", 00:04:10.421 "params": { 00:04:10.421 "small_cache_size": 128, 00:04:10.421 "large_cache_size": 16, 00:04:10.421 "task_count": 2048, 00:04:10.421 "sequence_count": 2048, 00:04:10.421 "buf_count": 2048 00:04:10.421 } 00:04:10.421 } 00:04:10.421 ] 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "subsystem": "bdev", 00:04:10.421 "config": [ 00:04:10.421 { 00:04:10.421 "method": "bdev_set_options", 00:04:10.421 "params": { 00:04:10.421 "bdev_io_pool_size": 65535, 00:04:10.421 "bdev_io_cache_size": 256, 00:04:10.421 "bdev_auto_examine": true, 00:04:10.421 "iobuf_small_cache_size": 128, 00:04:10.421 "iobuf_large_cache_size": 16 00:04:10.421 } 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "method": "bdev_raid_set_options", 00:04:10.421 "params": { 00:04:10.421 "process_window_size_kb": 1024, 00:04:10.421 "process_max_bandwidth_mb_sec": 0 00:04:10.421 } 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "method": "bdev_iscsi_set_options", 00:04:10.421 "params": { 00:04:10.421 "timeout_sec": 30 00:04:10.421 } 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "method": "bdev_nvme_set_options", 00:04:10.421 "params": { 00:04:10.421 "action_on_timeout": "none", 00:04:10.421 "timeout_us": 0, 00:04:10.421 "timeout_admin_us": 0, 00:04:10.421 "keep_alive_timeout_ms": 10000, 00:04:10.421 "arbitration_burst": 0, 00:04:10.421 "low_priority_weight": 0, 00:04:10.421 "medium_priority_weight": 0, 00:04:10.421 "high_priority_weight": 0, 00:04:10.421 "nvme_adminq_poll_period_us": 10000, 00:04:10.421 "nvme_ioq_poll_period_us": 0, 00:04:10.421 "io_queue_requests": 0, 00:04:10.421 "delay_cmd_submit": true, 00:04:10.421 "transport_retry_count": 4, 00:04:10.421 "bdev_retry_count": 3, 00:04:10.421 "transport_ack_timeout": 0, 00:04:10.421 "ctrlr_loss_timeout_sec": 0, 00:04:10.421 "reconnect_delay_sec": 0, 00:04:10.421 "fast_io_fail_timeout_sec": 0, 00:04:10.421 "disable_auto_failback": false, 00:04:10.421 "generate_uuids": 
false, 00:04:10.421 "transport_tos": 0, 00:04:10.421 "nvme_error_stat": false, 00:04:10.421 "rdma_srq_size": 0, 00:04:10.421 "io_path_stat": false, 00:04:10.421 "allow_accel_sequence": false, 00:04:10.421 "rdma_max_cq_size": 0, 00:04:10.421 "rdma_cm_event_timeout_ms": 0, 00:04:10.421 "dhchap_digests": [ 00:04:10.421 "sha256", 00:04:10.421 "sha384", 00:04:10.421 "sha512" 00:04:10.421 ], 00:04:10.421 "dhchap_dhgroups": [ 00:04:10.421 "null", 00:04:10.421 "ffdhe2048", 00:04:10.421 "ffdhe3072", 00:04:10.421 "ffdhe4096", 00:04:10.421 "ffdhe6144", 00:04:10.421 "ffdhe8192" 00:04:10.421 ] 00:04:10.421 } 00:04:10.421 }, 00:04:10.421 { 00:04:10.421 "method": "bdev_nvme_set_hotplug", 00:04:10.421 "params": { 00:04:10.422 "period_us": 100000, 00:04:10.422 "enable": false 00:04:10.422 } 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "method": "bdev_wait_for_examine" 00:04:10.422 } 00:04:10.422 ] 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "subsystem": "scsi", 00:04:10.422 "config": null 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "subsystem": "scheduler", 00:04:10.422 "config": [ 00:04:10.422 { 00:04:10.422 "method": "framework_set_scheduler", 00:04:10.422 "params": { 00:04:10.422 "name": "static" 00:04:10.422 } 00:04:10.422 } 00:04:10.422 ] 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "subsystem": "vhost_scsi", 00:04:10.422 "config": [] 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "subsystem": "vhost_blk", 00:04:10.422 "config": [] 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "subsystem": "ublk", 00:04:10.422 "config": [] 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "subsystem": "nbd", 00:04:10.422 "config": [] 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "subsystem": "nvmf", 00:04:10.422 "config": [ 00:04:10.422 { 00:04:10.422 "method": "nvmf_set_config", 00:04:10.422 "params": { 00:04:10.422 "discovery_filter": "match_any", 00:04:10.422 "admin_cmd_passthru": { 00:04:10.422 "identify_ctrlr": false 00:04:10.422 } 00:04:10.422 } 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "method": "nvmf_set_max_subsystems", 00:04:10.422 "params": { 00:04:10.422 "max_subsystems": 1024 00:04:10.422 } 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "method": "nvmf_set_crdt", 00:04:10.422 "params": { 00:04:10.422 "crdt1": 0, 00:04:10.422 "crdt2": 0, 00:04:10.422 "crdt3": 0 00:04:10.422 } 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "method": "nvmf_create_transport", 00:04:10.422 "params": { 00:04:10.422 "trtype": "TCP", 00:04:10.422 "max_queue_depth": 128, 00:04:10.422 "max_io_qpairs_per_ctrlr": 127, 00:04:10.422 "in_capsule_data_size": 4096, 00:04:10.422 "max_io_size": 131072, 00:04:10.422 "io_unit_size": 131072, 00:04:10.422 "max_aq_depth": 128, 00:04:10.422 "num_shared_buffers": 511, 00:04:10.422 "buf_cache_size": 4294967295, 00:04:10.422 "dif_insert_or_strip": false, 00:04:10.422 "zcopy": false, 00:04:10.422 "c2h_success": true, 00:04:10.422 "sock_priority": 0, 00:04:10.422 "abort_timeout_sec": 1, 00:04:10.422 "ack_timeout": 0, 00:04:10.422 "data_wr_pool_size": 0 00:04:10.422 } 00:04:10.422 } 00:04:10.422 ] 00:04:10.422 }, 00:04:10.422 { 00:04:10.422 "subsystem": "iscsi", 00:04:10.422 "config": [ 00:04:10.422 { 00:04:10.422 "method": "iscsi_set_options", 00:04:10.422 "params": { 00:04:10.422 "node_base": "iqn.2016-06.io.spdk", 00:04:10.422 "max_sessions": 128, 00:04:10.422 "max_connections_per_session": 2, 00:04:10.422 "max_queue_depth": 64, 00:04:10.422 "default_time2wait": 2, 00:04:10.422 "default_time2retain": 20, 00:04:10.422 "first_burst_length": 8192, 00:04:10.422 "immediate_data": true, 00:04:10.422 "allow_duplicated_isid": 
false, 00:04:10.422 "error_recovery_level": 0, 00:04:10.422 "nop_timeout": 60, 00:04:10.422 "nop_in_interval": 30, 00:04:10.422 "disable_chap": false, 00:04:10.422 "require_chap": false, 00:04:10.422 "mutual_chap": false, 00:04:10.422 "chap_group": 0, 00:04:10.422 "max_large_datain_per_connection": 64, 00:04:10.422 "max_r2t_per_connection": 4, 00:04:10.422 "pdu_pool_size": 36864, 00:04:10.422 "immediate_data_pool_size": 16384, 00:04:10.422 "data_out_pool_size": 2048 00:04:10.422 } 00:04:10.422 } 00:04:10.422 ] 00:04:10.422 } 00:04:10.422 ] 00:04:10.422 } 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 792326 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 792326 ']' 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 792326 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 792326 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 792326' 00:04:10.422 killing process with pid 792326 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 792326 00:04:10.422 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 792326 00:04:10.990 14:05:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=792467 00:04:10.990 14:05:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.990 14:05:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 792467 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 792467 ']' 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 792467 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 792467 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 792467' 00:04:16.267 killing process with pid 792467 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 792467 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 792467 
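The config.json dumped above is what skip_rpc_with_json replays in its second half: save the runtime configuration, restart the target from it, and confirm the TCP transport is re-created. A hand-driven sketch of that cycle, with /tmp/config.json standing in for the test's CONFIG_PATH, is:

$ scripts/rpc.py nvmf_create_transport -t tcp
$ scripts/rpc.py save_config > /tmp/config.json
# stop the target, then restart it from the saved configuration
$ build/bin/spdk_tgt --json /tmp/config.json
# the replayed config logs '*** TCP Transport Init ***', which the test greps for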
00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:16.267 00:04:16.267 real 0m6.551s 00:04:16.267 user 0m6.186s 00:04:16.267 sys 0m0.641s 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.267 14:05:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.267 ************************************ 00:04:16.267 END TEST skip_rpc_with_json 00:04:16.267 ************************************ 00:04:16.267 14:05:45 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:16.267 14:05:45 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:16.267 14:05:45 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.267 14:05:45 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.267 14:05:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.528 ************************************ 00:04:16.528 START TEST skip_rpc_with_delay 00:04:16.528 ************************************ 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:16.528 14:05:45 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:16.528 [2024-07-25 14:05:45.996423] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
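skip_rpc_with_delay only verifies the error shown above: --wait-for-rpc is rejected when --no-rpc-server disables the RPC server it depends on. For contrast, the supported --wait-for-rpc flow defers subsystem initialization until it is resumed over RPC, roughly:

$ build/bin/spdk_tgt --wait-for-rpc &
# issue any pre-init RPCs here, then let initialization proceed
$ scripts/rpc.py framework_start_init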
00:04:16.528 [2024-07-25 14:05:45.996536] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:16.528 14:05:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:16.528 14:05:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:16.528 14:05:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:16.528 14:05:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:16.528 00:04:16.528 real 0m0.067s 00:04:16.528 user 0m0.045s 00:04:16.528 sys 0m0.022s 00:04:16.528 14:05:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.528 14:05:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:16.528 ************************************ 00:04:16.528 END TEST skip_rpc_with_delay 00:04:16.528 ************************************ 00:04:16.528 14:05:46 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:16.528 14:05:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:16.528 14:05:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:16.528 14:05:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:16.528 14:05:46 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:16.528 14:05:46 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.528 14:05:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.528 ************************************ 00:04:16.528 START TEST exit_on_failed_rpc_init 00:04:16.528 ************************************ 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=793181 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 793181 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 793181 ']' 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:16.528 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:16.528 [2024-07-25 14:05:46.106223] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:04:16.528 [2024-07-25 14:05:46.106324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793181 ] 00:04:16.528 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.528 [2024-07-25 14:05:46.163098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.787 [2024-07-25 14:05:46.274334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:17.045 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:17.045 [2024-07-25 14:05:46.566430] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:04:17.045 [2024-07-25 14:05:46.566531] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793197 ] 00:04:17.045 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.045 [2024-07-25 14:05:46.623854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.304 [2024-07-25 14:05:46.732116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:17.304 [2024-07-25 14:05:46.732250] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:17.304 [2024-07-25 14:05:46.732269] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:17.304 [2024-07-25 14:05:46.732281] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 793181 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 793181 ']' 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 793181 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 793181 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 793181' 00:04:17.304 killing process with pid 793181 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 793181 00:04:17.304 14:05:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 793181 00:04:17.869 00:04:17.869 real 0m1.240s 00:04:17.869 user 0m1.401s 00:04:17.869 sys 0m0.429s 00:04:17.869 14:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.869 14:05:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:17.869 ************************************ 00:04:17.869 END TEST exit_on_failed_rpc_init 00:04:17.869 ************************************ 00:04:17.869 14:05:47 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:17.869 14:05:47 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.869 00:04:17.869 real 0m13.577s 00:04:17.869 user 0m12.913s 00:04:17.869 sys 0m1.538s 00:04:17.869 14:05:47 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.869 14:05:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.869 ************************************ 00:04:17.869 END TEST skip_rpc 00:04:17.869 ************************************ 00:04:17.869 14:05:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.869 14:05:47 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:17.869 14:05:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.869 14:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.870 14:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:17.870 ************************************ 00:04:17.870 START TEST rpc_client 00:04:17.870 ************************************ 00:04:17.870 14:05:47 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:17.870 * Looking for test storage... 00:04:17.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:17.870 14:05:47 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:17.870 OK 00:04:17.870 14:05:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:17.870 00:04:17.870 real 0m0.070s 00:04:17.870 user 0m0.030s 00:04:17.870 sys 0m0.045s 00:04:17.870 14:05:47 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.870 14:05:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:17.870 ************************************ 00:04:17.870 END TEST rpc_client 00:04:17.870 ************************************ 00:04:17.870 14:05:47 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.870 14:05:47 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:17.870 14:05:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.870 14:05:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.870 14:05:47 -- common/autotest_common.sh@10 -- # set +x 00:04:17.870 ************************************ 00:04:17.870 START TEST json_config 00:04:17.870 ************************************ 00:04:17.870 14:05:47 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:18.127 14:05:47 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:18.127 14:05:47 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:18.127 14:05:47 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:18.127 14:05:47 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:18.127 14:05:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.127 14:05:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.127 14:05:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.127 14:05:47 json_config -- paths/export.sh@5 -- # export PATH 00:04:18.127 14:05:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@47 -- # : 0 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:18.127 14:05:47 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:18.127 14:05:47 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:18.127 INFO: JSON configuration test init 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:18.127 14:05:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.127 14:05:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:18.127 14:05:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.127 14:05:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.127 14:05:47 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:18.127 14:05:47 json_config -- json_config/common.sh@9 -- # local app=target 00:04:18.127 14:05:47 json_config -- json_config/common.sh@10 -- # shift 00:04:18.127 14:05:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:18.127 14:05:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:18.127 14:05:47 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:04:18.127 14:05:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.127 14:05:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:18.128 14:05:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=793441 00:04:18.128 14:05:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:18.128 14:05:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:18.128 Waiting for target to run... 00:04:18.128 14:05:47 json_config -- json_config/common.sh@25 -- # waitforlisten 793441 /var/tmp/spdk_tgt.sock 00:04:18.128 14:05:47 json_config -- common/autotest_common.sh@829 -- # '[' -z 793441 ']' 00:04:18.128 14:05:47 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:18.128 14:05:47 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.128 14:05:47 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:18.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:18.128 14:05:47 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.128 14:05:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.128 [2024-07-25 14:05:47.603290] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:04:18.128 [2024-07-25 14:05:47.603389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid793441 ] 00:04:18.128 EAL: No free 2048 kB hugepages reported on node 1 00:04:18.694 [2024-07-25 14:05:48.110993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.694 [2024-07-25 14:05:48.204404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.952 14:05:48 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.952 14:05:48 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:18.952 14:05:48 json_config -- json_config/common.sh@26 -- # echo '' 00:04:18.952 00:04:18.952 14:05:48 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:18.952 14:05:48 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:18.952 14:05:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:18.952 14:05:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.952 14:05:48 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:18.952 14:05:48 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:18.952 14:05:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.952 14:05:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:18.952 14:05:48 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:18.952 14:05:48 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:18.952 14:05:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:04:22.234 14:05:51 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:22.234 14:05:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:22.234 14:05:51 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.234 14:05:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.234 14:05:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:22.234 14:05:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:22.234 14:05:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:22.234 14:05:51 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:22.234 14:05:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:22.234 14:05:51 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@51 -- # sort 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:22.492 14:05:51 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:22.492 14:05:51 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:22.492 14:05:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:22.492 14:05:52 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:22.492 14:05:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:22.492 14:05:52 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:22.492 14:05:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:22.749 MallocForNvmf0 00:04:22.749 14:05:52 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:22.749 14:05:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:23.006 MallocForNvmf1 00:04:23.006 14:05:52 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.006 14:05:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:23.263 [2024-07-25 14:05:52.730019] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.263 14:05:52 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.263 14:05:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:23.521 14:05:52 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.521 14:05:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:23.778 14:05:53 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:23.778 14:05:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:24.035 14:05:53 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.035 14:05:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:24.293 [2024-07-25 14:05:53.705125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:24.293 14:05:53 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:24.293 14:05:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.293 14:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.293 14:05:53 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:24.293 14:05:53 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.293 14:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.293 14:05:53 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:24.293 14:05:53 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:24.293 14:05:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:04:24.550 MallocBdevForConfigChangeCheck 00:04:24.550 14:05:54 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:24.550 14:05:54 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.550 14:05:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.550 14:05:54 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:24.550 14:05:54 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.807 14:05:54 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:24.807 INFO: shutting down applications... 00:04:24.807 14:05:54 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:24.807 14:05:54 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:24.807 14:05:54 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:24.808 14:05:54 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:26.716 Calling clear_iscsi_subsystem 00:04:26.716 Calling clear_nvmf_subsystem 00:04:26.716 Calling clear_nbd_subsystem 00:04:26.716 Calling clear_ublk_subsystem 00:04:26.716 Calling clear_vhost_blk_subsystem 00:04:26.716 Calling clear_vhost_scsi_subsystem 00:04:26.716 Calling clear_bdev_subsystem 00:04:26.716 14:05:56 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:26.716 14:05:56 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:26.716 14:05:56 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:26.716 14:05:56 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.716 14:05:56 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:26.716 14:05:56 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:26.976 14:05:56 json_config -- json_config/json_config.sh@349 -- # break 00:04:26.976 14:05:56 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:26.976 14:05:56 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:26.976 14:05:56 json_config -- json_config/common.sh@31 -- # local app=target 00:04:26.976 14:05:56 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:26.976 14:05:56 json_config -- json_config/common.sh@35 -- # [[ -n 793441 ]] 00:04:26.976 14:05:56 json_config -- json_config/common.sh@38 -- # kill -SIGINT 793441 00:04:26.976 14:05:56 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:26.976 14:05:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:26.976 14:05:56 json_config -- json_config/common.sh@41 -- # kill -0 793441 00:04:26.976 14:05:56 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:27.546 14:05:56 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:27.546 14:05:56 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:27.546 14:05:56 
json_config -- json_config/common.sh@41 -- # kill -0 793441 00:04:27.546 14:05:56 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:27.546 14:05:56 json_config -- json_config/common.sh@43 -- # break 00:04:27.546 14:05:56 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:27.546 14:05:56 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:27.546 SPDK target shutdown done 00:04:27.546 14:05:56 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:27.546 INFO: relaunching applications... 00:04:27.546 14:05:56 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.546 14:05:56 json_config -- json_config/common.sh@9 -- # local app=target 00:04:27.546 14:05:56 json_config -- json_config/common.sh@10 -- # shift 00:04:27.546 14:05:56 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:27.546 14:05:56 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:27.546 14:05:56 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:27.546 14:05:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.546 14:05:56 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:27.546 14:05:56 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=794748 00:04:27.546 14:05:56 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.546 14:05:56 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:27.546 Waiting for target to run... 00:04:27.546 14:05:56 json_config -- json_config/common.sh@25 -- # waitforlisten 794748 /var/tmp/spdk_tgt.sock 00:04:27.546 14:05:56 json_config -- common/autotest_common.sh@829 -- # '[' -z 794748 ']' 00:04:27.546 14:05:56 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:27.546 14:05:56 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:27.546 14:05:56 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:27.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:27.546 14:05:56 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:27.546 14:05:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.546 [2024-07-25 14:05:56.972973] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:04:27.546 [2024-07-25 14:05:56.973069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid794748 ] 00:04:27.546 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.112 [2024-07-25 14:05:57.492820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.112 [2024-07-25 14:05:57.586400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.404 [2024-07-25 14:06:00.619111] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:31.404 [2024-07-25 14:06:00.651626] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:31.972 14:06:01 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.972 14:06:01 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:31.972 14:06:01 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.972 00:04:31.972 14:06:01 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:31.972 14:06:01 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:31.972 INFO: Checking if target configuration is the same... 00:04:31.972 14:06:01 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.972 14:06:01 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:31.972 14:06:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.972 + '[' 2 -ne 2 ']' 00:04:31.972 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:31.972 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:31.972 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.972 +++ basename /dev/fd/62 00:04:31.972 ++ mktemp /tmp/62.XXX 00:04:31.972 + tmp_file_1=/tmp/62.eV0 00:04:31.972 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.972 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.972 + tmp_file_2=/tmp/spdk_tgt_config.json.GYm 00:04:31.972 + ret=0 00:04:31.972 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.262 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:32.262 + diff -u /tmp/62.eV0 /tmp/spdk_tgt_config.json.GYm 00:04:32.263 + echo 'INFO: JSON config files are the same' 00:04:32.263 INFO: JSON config files are the same 00:04:32.263 + rm /tmp/62.eV0 /tmp/spdk_tgt_config.json.GYm 00:04:32.263 + exit 0 00:04:32.263 14:06:01 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:32.263 14:06:01 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:32.263 INFO: changing configuration and checking if this can be detected... 
00:04:32.263 14:06:01 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.263 14:06:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:32.521 14:06:02 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.521 14:06:02 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:32.521 14:06:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.521 + '[' 2 -ne 2 ']' 00:04:32.521 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:32.521 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:32.521 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:32.521 +++ basename /dev/fd/62 00:04:32.521 ++ mktemp /tmp/62.XXX 00:04:32.521 + tmp_file_1=/tmp/62.GG3 00:04:32.521 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.521 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:32.521 + tmp_file_2=/tmp/spdk_tgt_config.json.llW 00:04:32.521 + ret=0 00:04:32.521 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.092 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:33.092 + diff -u /tmp/62.GG3 /tmp/spdk_tgt_config.json.llW 00:04:33.092 + ret=1 00:04:33.092 + echo '=== Start of file: /tmp/62.GG3 ===' 00:04:33.092 + cat /tmp/62.GG3 00:04:33.092 + echo '=== End of file: /tmp/62.GG3 ===' 00:04:33.092 + echo '' 00:04:33.092 + echo '=== Start of file: /tmp/spdk_tgt_config.json.llW ===' 00:04:33.092 + cat /tmp/spdk_tgt_config.json.llW 00:04:33.092 + echo '=== End of file: /tmp/spdk_tgt_config.json.llW ===' 00:04:33.092 + echo '' 00:04:33.092 + rm /tmp/62.GG3 /tmp/spdk_tgt_config.json.llW 00:04:33.092 + exit 1 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:33.092 INFO: configuration change detected. 
00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@321 -- # [[ -n 794748 ]] 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.092 14:06:02 json_config -- json_config/json_config.sh@327 -- # killprocess 794748 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@948 -- # '[' -z 794748 ']' 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@952 -- # kill -0 794748 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@953 -- # uname 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 794748 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 794748' 00:04:33.092 killing process with pid 794748 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@967 -- # kill 794748 00:04:33.092 14:06:02 json_config -- common/autotest_common.sh@972 -- # wait 794748 00:04:34.998 14:06:04 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:34.998 14:06:04 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:34.998 14:06:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:34.998 14:06:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.998 14:06:04 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:34.998 14:06:04 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:34.998 INFO: Success 00:04:34.998 00:04:34.998 real 0m16.812s 00:04:34.998 user 
0m18.588s 00:04:34.998 sys 0m2.280s 00:04:34.999 14:06:04 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.999 14:06:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.999 ************************************ 00:04:34.999 END TEST json_config 00:04:34.999 ************************************ 00:04:34.999 14:06:04 -- common/autotest_common.sh@1142 -- # return 0 00:04:34.999 14:06:04 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:34.999 14:06:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.999 14:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.999 14:06:04 -- common/autotest_common.sh@10 -- # set +x 00:04:34.999 ************************************ 00:04:34.999 START TEST json_config_extra_key 00:04:34.999 ************************************ 00:04:34.999 14:06:04 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:34.999 14:06:04 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:34.999 14:06:04 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:34.999 14:06:04 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:34.999 14:06:04 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.999 14:06:04 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.999 14:06:04 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.999 14:06:04 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:34.999 14:06:04 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:34.999 14:06:04 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:34.999 14:06:04 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:34.999 INFO: launching applications... 00:04:34.999 14:06:04 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=795785 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:34.999 Waiting for target to run... 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:34.999 14:06:04 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 795785 /var/tmp/spdk_tgt.sock 00:04:34.999 14:06:04 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 795785 ']' 00:04:34.999 14:06:04 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:34.999 14:06:04 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.999 14:06:04 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:34.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:34.999 14:06:04 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.999 14:06:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:34.999 [2024-07-25 14:06:04.459697] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:04:34.999 [2024-07-25 14:06:04.459778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid795785 ] 00:04:34.999 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.567 [2024-07-25 14:06:04.973211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.567 [2024-07-25 14:06:05.070817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.827 14:06:05 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.827 14:06:05 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:35.827 00:04:35.827 14:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:35.827 INFO: shutting down applications... 00:04:35.827 14:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 795785 ]] 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 795785 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 795785 00:04:35.827 14:06:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:36.394 14:06:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:36.394 14:06:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:36.394 14:06:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 795785 00:04:36.394 14:06:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:36.394 14:06:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:36.394 14:06:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:36.394 14:06:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:36.394 SPDK target shutdown done 00:04:36.394 14:06:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:36.394 Success 00:04:36.394 00:04:36.394 real 0m1.561s 00:04:36.394 user 0m1.398s 00:04:36.394 sys 0m0.603s 00:04:36.394 14:06:05 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.394 14:06:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.394 ************************************ 00:04:36.394 END TEST json_config_extra_key 00:04:36.394 ************************************ 00:04:36.394 14:06:05 -- common/autotest_common.sh@1142 -- # return 0 00:04:36.394 14:06:05 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.394 14:06:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.394 14:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.394 14:06:05 -- 
common/autotest_common.sh@10 -- # set +x 00:04:36.394 ************************************ 00:04:36.394 START TEST alias_rpc 00:04:36.394 ************************************ 00:04:36.394 14:06:05 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:36.394 * Looking for test storage... 00:04:36.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:36.394 14:06:06 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:36.394 14:06:06 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=796096 00:04:36.394 14:06:06 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.394 14:06:06 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 796096 00:04:36.394 14:06:06 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 796096 ']' 00:04:36.394 14:06:06 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.394 14:06:06 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.394 14:06:06 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.394 14:06:06 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.394 14:06:06 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.654 [2024-07-25 14:06:06.075053] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:04:36.654 [2024-07-25 14:06:06.075182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796096 ] 00:04:36.654 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.654 [2024-07-25 14:06:06.132622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.654 [2024-07-25 14:06:06.239979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.591 14:06:07 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.591 14:06:07 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:37.591 14:06:07 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:37.849 14:06:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 796096 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 796096 ']' 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 796096 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796096 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796096' 00:04:37.850 killing process with pid 796096 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@967 
-- # kill 796096 00:04:37.850 14:06:07 alias_rpc -- common/autotest_common.sh@972 -- # wait 796096 00:04:38.108 00:04:38.108 real 0m1.765s 00:04:38.108 user 0m2.063s 00:04:38.108 sys 0m0.435s 00:04:38.108 14:06:07 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.108 14:06:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.108 ************************************ 00:04:38.108 END TEST alias_rpc 00:04:38.108 ************************************ 00:04:38.108 14:06:07 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.108 14:06:07 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:38.108 14:06:07 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:38.108 14:06:07 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.108 14:06:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.108 14:06:07 -- common/autotest_common.sh@10 -- # set +x 00:04:38.367 ************************************ 00:04:38.367 START TEST spdkcli_tcp 00:04:38.367 ************************************ 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:38.367 * Looking for test storage... 00:04:38.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=796512 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:38.367 14:06:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 796512 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 796512 ']' 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.367 14:06:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:38.367 [2024-07-25 14:06:07.892230] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:04:38.367 [2024-07-25 14:06:07.892319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid796512 ] 00:04:38.367 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.367 [2024-07-25 14:06:07.953549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:38.625 [2024-07-25 14:06:08.065004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.625 [2024-07-25 14:06:08.065008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.884 14:06:08 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.884 14:06:08 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:38.884 14:06:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=796772 00:04:38.884 14:06:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:38.884 14:06:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:39.143 [ 00:04:39.143 "bdev_malloc_delete", 00:04:39.143 "bdev_malloc_create", 00:04:39.143 "bdev_null_resize", 00:04:39.143 "bdev_null_delete", 00:04:39.143 "bdev_null_create", 00:04:39.143 "bdev_nvme_cuse_unregister", 00:04:39.143 "bdev_nvme_cuse_register", 00:04:39.143 "bdev_opal_new_user", 00:04:39.143 "bdev_opal_set_lock_state", 00:04:39.143 "bdev_opal_delete", 00:04:39.143 "bdev_opal_get_info", 00:04:39.143 "bdev_opal_create", 00:04:39.143 "bdev_nvme_opal_revert", 00:04:39.143 "bdev_nvme_opal_init", 00:04:39.143 "bdev_nvme_send_cmd", 00:04:39.143 "bdev_nvme_get_path_iostat", 00:04:39.143 "bdev_nvme_get_mdns_discovery_info", 00:04:39.143 "bdev_nvme_stop_mdns_discovery", 00:04:39.143 "bdev_nvme_start_mdns_discovery", 00:04:39.143 "bdev_nvme_set_multipath_policy", 00:04:39.143 "bdev_nvme_set_preferred_path", 00:04:39.143 "bdev_nvme_get_io_paths", 00:04:39.143 "bdev_nvme_remove_error_injection", 00:04:39.143 "bdev_nvme_add_error_injection", 00:04:39.143 "bdev_nvme_get_discovery_info", 00:04:39.143 "bdev_nvme_stop_discovery", 00:04:39.143 "bdev_nvme_start_discovery", 00:04:39.143 "bdev_nvme_get_controller_health_info", 00:04:39.143 "bdev_nvme_disable_controller", 00:04:39.143 "bdev_nvme_enable_controller", 00:04:39.143 "bdev_nvme_reset_controller", 00:04:39.143 "bdev_nvme_get_transport_statistics", 00:04:39.143 "bdev_nvme_apply_firmware", 00:04:39.143 "bdev_nvme_detach_controller", 00:04:39.143 "bdev_nvme_get_controllers", 00:04:39.143 "bdev_nvme_attach_controller", 00:04:39.143 "bdev_nvme_set_hotplug", 00:04:39.143 "bdev_nvme_set_options", 00:04:39.143 "bdev_passthru_delete", 00:04:39.143 "bdev_passthru_create", 00:04:39.143 "bdev_lvol_set_parent_bdev", 00:04:39.143 "bdev_lvol_set_parent", 00:04:39.143 "bdev_lvol_check_shallow_copy", 00:04:39.143 "bdev_lvol_start_shallow_copy", 00:04:39.143 "bdev_lvol_grow_lvstore", 00:04:39.143 "bdev_lvol_get_lvols", 00:04:39.143 "bdev_lvol_get_lvstores", 00:04:39.143 "bdev_lvol_delete", 00:04:39.143 "bdev_lvol_set_read_only", 00:04:39.143 "bdev_lvol_resize", 00:04:39.143 "bdev_lvol_decouple_parent", 00:04:39.143 "bdev_lvol_inflate", 00:04:39.143 "bdev_lvol_rename", 00:04:39.143 "bdev_lvol_clone_bdev", 00:04:39.143 "bdev_lvol_clone", 00:04:39.143 "bdev_lvol_snapshot", 00:04:39.143 "bdev_lvol_create", 00:04:39.143 "bdev_lvol_delete_lvstore", 00:04:39.143 
"bdev_lvol_rename_lvstore", 00:04:39.143 "bdev_lvol_create_lvstore", 00:04:39.143 "bdev_raid_set_options", 00:04:39.143 "bdev_raid_remove_base_bdev", 00:04:39.143 "bdev_raid_add_base_bdev", 00:04:39.143 "bdev_raid_delete", 00:04:39.143 "bdev_raid_create", 00:04:39.143 "bdev_raid_get_bdevs", 00:04:39.143 "bdev_error_inject_error", 00:04:39.143 "bdev_error_delete", 00:04:39.143 "bdev_error_create", 00:04:39.143 "bdev_split_delete", 00:04:39.143 "bdev_split_create", 00:04:39.143 "bdev_delay_delete", 00:04:39.143 "bdev_delay_create", 00:04:39.143 "bdev_delay_update_latency", 00:04:39.143 "bdev_zone_block_delete", 00:04:39.143 "bdev_zone_block_create", 00:04:39.143 "blobfs_create", 00:04:39.143 "blobfs_detect", 00:04:39.143 "blobfs_set_cache_size", 00:04:39.143 "bdev_aio_delete", 00:04:39.143 "bdev_aio_rescan", 00:04:39.143 "bdev_aio_create", 00:04:39.143 "bdev_ftl_set_property", 00:04:39.143 "bdev_ftl_get_properties", 00:04:39.143 "bdev_ftl_get_stats", 00:04:39.143 "bdev_ftl_unmap", 00:04:39.143 "bdev_ftl_unload", 00:04:39.143 "bdev_ftl_delete", 00:04:39.143 "bdev_ftl_load", 00:04:39.143 "bdev_ftl_create", 00:04:39.143 "bdev_virtio_attach_controller", 00:04:39.143 "bdev_virtio_scsi_get_devices", 00:04:39.143 "bdev_virtio_detach_controller", 00:04:39.143 "bdev_virtio_blk_set_hotplug", 00:04:39.143 "bdev_iscsi_delete", 00:04:39.143 "bdev_iscsi_create", 00:04:39.143 "bdev_iscsi_set_options", 00:04:39.143 "accel_error_inject_error", 00:04:39.143 "ioat_scan_accel_module", 00:04:39.143 "dsa_scan_accel_module", 00:04:39.143 "iaa_scan_accel_module", 00:04:39.143 "vfu_virtio_create_scsi_endpoint", 00:04:39.143 "vfu_virtio_scsi_remove_target", 00:04:39.143 "vfu_virtio_scsi_add_target", 00:04:39.144 "vfu_virtio_create_blk_endpoint", 00:04:39.144 "vfu_virtio_delete_endpoint", 00:04:39.144 "keyring_file_remove_key", 00:04:39.144 "keyring_file_add_key", 00:04:39.144 "keyring_linux_set_options", 00:04:39.144 "iscsi_get_histogram", 00:04:39.144 "iscsi_enable_histogram", 00:04:39.144 "iscsi_set_options", 00:04:39.144 "iscsi_get_auth_groups", 00:04:39.144 "iscsi_auth_group_remove_secret", 00:04:39.144 "iscsi_auth_group_add_secret", 00:04:39.144 "iscsi_delete_auth_group", 00:04:39.144 "iscsi_create_auth_group", 00:04:39.144 "iscsi_set_discovery_auth", 00:04:39.144 "iscsi_get_options", 00:04:39.144 "iscsi_target_node_request_logout", 00:04:39.144 "iscsi_target_node_set_redirect", 00:04:39.144 "iscsi_target_node_set_auth", 00:04:39.144 "iscsi_target_node_add_lun", 00:04:39.144 "iscsi_get_stats", 00:04:39.144 "iscsi_get_connections", 00:04:39.144 "iscsi_portal_group_set_auth", 00:04:39.144 "iscsi_start_portal_group", 00:04:39.144 "iscsi_delete_portal_group", 00:04:39.144 "iscsi_create_portal_group", 00:04:39.144 "iscsi_get_portal_groups", 00:04:39.144 "iscsi_delete_target_node", 00:04:39.144 "iscsi_target_node_remove_pg_ig_maps", 00:04:39.144 "iscsi_target_node_add_pg_ig_maps", 00:04:39.144 "iscsi_create_target_node", 00:04:39.144 "iscsi_get_target_nodes", 00:04:39.144 "iscsi_delete_initiator_group", 00:04:39.144 "iscsi_initiator_group_remove_initiators", 00:04:39.144 "iscsi_initiator_group_add_initiators", 00:04:39.144 "iscsi_create_initiator_group", 00:04:39.144 "iscsi_get_initiator_groups", 00:04:39.144 "nvmf_set_crdt", 00:04:39.144 "nvmf_set_config", 00:04:39.144 "nvmf_set_max_subsystems", 00:04:39.144 "nvmf_stop_mdns_prr", 00:04:39.144 "nvmf_publish_mdns_prr", 00:04:39.144 "nvmf_subsystem_get_listeners", 00:04:39.144 "nvmf_subsystem_get_qpairs", 00:04:39.144 "nvmf_subsystem_get_controllers", 00:04:39.144 
"nvmf_get_stats", 00:04:39.144 "nvmf_get_transports", 00:04:39.144 "nvmf_create_transport", 00:04:39.144 "nvmf_get_targets", 00:04:39.144 "nvmf_delete_target", 00:04:39.144 "nvmf_create_target", 00:04:39.144 "nvmf_subsystem_allow_any_host", 00:04:39.144 "nvmf_subsystem_remove_host", 00:04:39.144 "nvmf_subsystem_add_host", 00:04:39.144 "nvmf_ns_remove_host", 00:04:39.144 "nvmf_ns_add_host", 00:04:39.144 "nvmf_subsystem_remove_ns", 00:04:39.144 "nvmf_subsystem_add_ns", 00:04:39.144 "nvmf_subsystem_listener_set_ana_state", 00:04:39.144 "nvmf_discovery_get_referrals", 00:04:39.144 "nvmf_discovery_remove_referral", 00:04:39.144 "nvmf_discovery_add_referral", 00:04:39.144 "nvmf_subsystem_remove_listener", 00:04:39.144 "nvmf_subsystem_add_listener", 00:04:39.144 "nvmf_delete_subsystem", 00:04:39.144 "nvmf_create_subsystem", 00:04:39.144 "nvmf_get_subsystems", 00:04:39.144 "env_dpdk_get_mem_stats", 00:04:39.144 "nbd_get_disks", 00:04:39.144 "nbd_stop_disk", 00:04:39.144 "nbd_start_disk", 00:04:39.144 "ublk_recover_disk", 00:04:39.144 "ublk_get_disks", 00:04:39.144 "ublk_stop_disk", 00:04:39.144 "ublk_start_disk", 00:04:39.144 "ublk_destroy_target", 00:04:39.144 "ublk_create_target", 00:04:39.144 "virtio_blk_create_transport", 00:04:39.144 "virtio_blk_get_transports", 00:04:39.144 "vhost_controller_set_coalescing", 00:04:39.144 "vhost_get_controllers", 00:04:39.144 "vhost_delete_controller", 00:04:39.144 "vhost_create_blk_controller", 00:04:39.144 "vhost_scsi_controller_remove_target", 00:04:39.144 "vhost_scsi_controller_add_target", 00:04:39.144 "vhost_start_scsi_controller", 00:04:39.144 "vhost_create_scsi_controller", 00:04:39.144 "thread_set_cpumask", 00:04:39.144 "framework_get_governor", 00:04:39.144 "framework_get_scheduler", 00:04:39.144 "framework_set_scheduler", 00:04:39.144 "framework_get_reactors", 00:04:39.144 "thread_get_io_channels", 00:04:39.144 "thread_get_pollers", 00:04:39.144 "thread_get_stats", 00:04:39.144 "framework_monitor_context_switch", 00:04:39.144 "spdk_kill_instance", 00:04:39.144 "log_enable_timestamps", 00:04:39.144 "log_get_flags", 00:04:39.144 "log_clear_flag", 00:04:39.144 "log_set_flag", 00:04:39.144 "log_get_level", 00:04:39.144 "log_set_level", 00:04:39.144 "log_get_print_level", 00:04:39.144 "log_set_print_level", 00:04:39.144 "framework_enable_cpumask_locks", 00:04:39.144 "framework_disable_cpumask_locks", 00:04:39.144 "framework_wait_init", 00:04:39.144 "framework_start_init", 00:04:39.144 "scsi_get_devices", 00:04:39.144 "bdev_get_histogram", 00:04:39.144 "bdev_enable_histogram", 00:04:39.144 "bdev_set_qos_limit", 00:04:39.144 "bdev_set_qd_sampling_period", 00:04:39.144 "bdev_get_bdevs", 00:04:39.144 "bdev_reset_iostat", 00:04:39.144 "bdev_get_iostat", 00:04:39.144 "bdev_examine", 00:04:39.144 "bdev_wait_for_examine", 00:04:39.144 "bdev_set_options", 00:04:39.144 "notify_get_notifications", 00:04:39.144 "notify_get_types", 00:04:39.144 "accel_get_stats", 00:04:39.144 "accel_set_options", 00:04:39.144 "accel_set_driver", 00:04:39.144 "accel_crypto_key_destroy", 00:04:39.144 "accel_crypto_keys_get", 00:04:39.144 "accel_crypto_key_create", 00:04:39.144 "accel_assign_opc", 00:04:39.144 "accel_get_module_info", 00:04:39.144 "accel_get_opc_assignments", 00:04:39.144 "vmd_rescan", 00:04:39.144 "vmd_remove_device", 00:04:39.144 "vmd_enable", 00:04:39.144 "sock_get_default_impl", 00:04:39.144 "sock_set_default_impl", 00:04:39.144 "sock_impl_set_options", 00:04:39.144 "sock_impl_get_options", 00:04:39.144 "iobuf_get_stats", 00:04:39.144 "iobuf_set_options", 
00:04:39.144 "keyring_get_keys", 00:04:39.144 "framework_get_pci_devices", 00:04:39.144 "framework_get_config", 00:04:39.144 "framework_get_subsystems", 00:04:39.144 "vfu_tgt_set_base_path", 00:04:39.144 "trace_get_info", 00:04:39.144 "trace_get_tpoint_group_mask", 00:04:39.144 "trace_disable_tpoint_group", 00:04:39.144 "trace_enable_tpoint_group", 00:04:39.144 "trace_clear_tpoint_mask", 00:04:39.144 "trace_set_tpoint_mask", 00:04:39.144 "spdk_get_version", 00:04:39.144 "rpc_get_methods" 00:04:39.144 ] 00:04:39.144 14:06:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.144 14:06:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:39.144 14:06:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 796512 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 796512 ']' 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 796512 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 796512 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 796512' 00:04:39.144 killing process with pid 796512 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 796512 00:04:39.144 14:06:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 796512 00:04:39.404 00:04:39.404 real 0m1.269s 00:04:39.404 user 0m2.222s 00:04:39.404 sys 0m0.451s 00:04:39.404 14:06:09 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.404 14:06:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:39.404 ************************************ 00:04:39.404 END TEST spdkcli_tcp 00:04:39.404 ************************************ 00:04:39.662 14:06:09 -- common/autotest_common.sh@1142 -- # return 0 00:04:39.662 14:06:09 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.662 14:06:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.662 14:06:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.662 14:06:09 -- common/autotest_common.sh@10 -- # set +x 00:04:39.662 ************************************ 00:04:39.662 START TEST dpdk_mem_utility 00:04:39.662 ************************************ 00:04:39.662 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:39.662 * Looking for test storage... 
00:04:39.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:39.662 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:39.662 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=797086 00:04:39.662 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:39.662 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 797086 00:04:39.662 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 797086 ']' 00:04:39.662 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.662 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.662 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.662 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.662 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:39.662 [2024-07-25 14:06:09.201424] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:04:39.662 [2024-07-25 14:06:09.201509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797086 ] 00:04:39.662 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.662 [2024-07-25 14:06:09.263688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.921 [2024-07-25 14:06:09.374148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.182 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.182 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:40.182 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:40.182 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:40.182 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.182 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.182 { 00:04:40.182 "filename": "/tmp/spdk_mem_dump.txt" 00:04:40.182 } 00:04:40.182 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.182 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:40.182 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:40.182 1 heaps totaling size 814.000000 MiB 00:04:40.182 size: 814.000000 MiB heap id: 0 00:04:40.182 end heaps---------- 00:04:40.182 8 mempools totaling size 598.116089 MiB 00:04:40.182 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:40.182 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:40.182 size: 84.521057 MiB name: bdev_io_797086 00:04:40.182 size: 51.011292 MiB name: evtpool_797086 00:04:40.182 size: 
50.003479 MiB name: msgpool_797086 00:04:40.182 size: 21.763794 MiB name: PDU_Pool 00:04:40.182 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:40.182 size: 0.026123 MiB name: Session_Pool 00:04:40.182 end mempools------- 00:04:40.182 6 memzones totaling size 4.142822 MiB 00:04:40.182 size: 1.000366 MiB name: RG_ring_0_797086 00:04:40.182 size: 1.000366 MiB name: RG_ring_1_797086 00:04:40.182 size: 1.000366 MiB name: RG_ring_4_797086 00:04:40.182 size: 1.000366 MiB name: RG_ring_5_797086 00:04:40.182 size: 0.125366 MiB name: RG_ring_2_797086 00:04:40.182 size: 0.015991 MiB name: RG_ring_3_797086 00:04:40.182 end memzones------- 00:04:40.182 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:40.182 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:40.182 list of free elements. size: 12.519348 MiB 00:04:40.182 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:40.182 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:40.182 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:40.182 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:40.182 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:40.182 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:40.182 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:40.183 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:40.183 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:40.183 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:40.183 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:40.183 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:40.183 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:40.183 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:40.183 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:40.183 list of standard malloc elements. 
size: 199.218079 MiB 00:04:40.183 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:40.183 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:40.183 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:40.183 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:40.183 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:40.183 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:40.183 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:40.183 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:40.183 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:40.183 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:40.183 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:40.183 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:40.183 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:40.183 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:40.183 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:40.183 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:40.183 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:40.183 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:40.183 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:40.183 list of memzone associated elements. 
size: 602.262573 MiB 00:04:40.183 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:40.183 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:40.183 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:40.183 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:40.183 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:40.183 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_797086_0 00:04:40.183 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:40.183 associated memzone info: size: 48.002930 MiB name: MP_evtpool_797086_0 00:04:40.183 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:40.183 associated memzone info: size: 48.002930 MiB name: MP_msgpool_797086_0 00:04:40.183 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:40.183 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:40.183 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:40.183 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:40.183 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:40.183 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_797086 00:04:40.183 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:40.183 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_797086 00:04:40.183 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:40.183 associated memzone info: size: 1.007996 MiB name: MP_evtpool_797086 00:04:40.183 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:40.183 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:40.183 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:40.183 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:40.183 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:40.183 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:40.183 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:40.183 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:40.183 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:40.183 associated memzone info: size: 1.000366 MiB name: RG_ring_0_797086 00:04:40.183 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:40.183 associated memzone info: size: 1.000366 MiB name: RG_ring_1_797086 00:04:40.183 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:40.183 associated memzone info: size: 1.000366 MiB name: RG_ring_4_797086 00:04:40.183 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:40.183 associated memzone info: size: 1.000366 MiB name: RG_ring_5_797086 00:04:40.183 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:40.183 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_797086 00:04:40.183 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:40.183 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:40.183 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:40.183 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:40.183 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:40.183 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:40.183 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:40.183 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_797086 00:04:40.183 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:40.183 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:40.183 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:40.183 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:40.183 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:40.183 associated memzone info: size: 0.015991 MiB name: RG_ring_3_797086 00:04:40.183 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:40.183 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:40.183 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:40.183 associated memzone info: size: 0.000183 MiB name: MP_msgpool_797086 00:04:40.183 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:40.183 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_797086 00:04:40.183 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:40.183 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:40.183 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:40.183 14:06:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 797086 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 797086 ']' 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 797086 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 797086 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 797086' 00:04:40.183 killing process with pid 797086 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 797086 00:04:40.183 14:06:09 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 797086 00:04:40.752 00:04:40.752 real 0m1.111s 00:04:40.752 user 0m1.095s 00:04:40.752 sys 0m0.398s 00:04:40.752 14:06:10 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.752 14:06:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:40.752 ************************************ 00:04:40.752 END TEST dpdk_mem_utility 00:04:40.752 ************************************ 00:04:40.752 14:06:10 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.752 14:06:10 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:40.752 14:06:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.752 14:06:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.752 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:04:40.752 ************************************ 00:04:40.752 START TEST event 00:04:40.752 ************************************ 00:04:40.752 14:06:10 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:40.752 * Looking for test storage... 
00:04:40.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:40.752 14:06:10 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:40.752 14:06:10 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:40.752 14:06:10 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.752 14:06:10 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:40.752 14:06:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.752 14:06:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.752 ************************************ 00:04:40.752 START TEST event_perf 00:04:40.752 ************************************ 00:04:40.752 14:06:10 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:40.752 Running I/O for 1 seconds...[2024-07-25 14:06:10.355220] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:04:40.752 [2024-07-25 14:06:10.355285] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797304 ] 00:04:40.752 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.012 [2024-07-25 14:06:10.419469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:41.012 [2024-07-25 14:06:10.529005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.012 [2024-07-25 14:06:10.529077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.012 [2024-07-25 14:06:10.529144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:41.012 [2024-07-25 14:06:10.529147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.391 Running I/O for 1 seconds... 00:04:42.391 lcore 0: 230736 00:04:42.391 lcore 1: 230736 00:04:42.391 lcore 2: 230736 00:04:42.391 lcore 3: 230736 00:04:42.391 done. 00:04:42.391 00:04:42.391 real 0m1.299s 00:04:42.391 user 0m4.212s 00:04:42.391 sys 0m0.079s 00:04:42.391 14:06:11 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.391 14:06:11 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.391 ************************************ 00:04:42.391 END TEST event_perf 00:04:42.391 ************************************ 00:04:42.391 14:06:11 event -- common/autotest_common.sh@1142 -- # return 0 00:04:42.392 14:06:11 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.392 14:06:11 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:42.392 14:06:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.392 14:06:11 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.392 ************************************ 00:04:42.392 START TEST event_reactor 00:04:42.392 ************************************ 00:04:42.392 14:06:11 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:42.392 [2024-07-25 14:06:11.695748] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:04:42.392 [2024-07-25 14:06:11.695815] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797468 ] 00:04:42.392 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.392 [2024-07-25 14:06:11.754008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.392 [2024-07-25 14:06:11.857063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.330 test_start 00:04:43.330 oneshot 00:04:43.330 tick 100 00:04:43.330 tick 100 00:04:43.330 tick 250 00:04:43.330 tick 100 00:04:43.330 tick 100 00:04:43.330 tick 100 00:04:43.330 tick 250 00:04:43.330 tick 500 00:04:43.330 tick 100 00:04:43.330 tick 100 00:04:43.330 tick 250 00:04:43.330 tick 100 00:04:43.330 tick 100 00:04:43.330 test_end 00:04:43.330 00:04:43.330 real 0m1.285s 00:04:43.330 user 0m1.202s 00:04:43.330 sys 0m0.078s 00:04:43.330 14:06:12 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:43.330 14:06:12 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:43.330 ************************************ 00:04:43.330 END TEST event_reactor 00:04:43.330 ************************************ 00:04:43.590 14:06:12 event -- common/autotest_common.sh@1142 -- # return 0 00:04:43.590 14:06:12 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.590 14:06:12 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:43.590 14:06:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:43.590 14:06:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.590 ************************************ 00:04:43.590 START TEST event_reactor_perf 00:04:43.590 ************************************ 00:04:43.590 14:06:13 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:43.590 [2024-07-25 14:06:13.032020] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:04:43.590 [2024-07-25 14:06:13.032110] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797627 ] 00:04:43.590 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.590 [2024-07-25 14:06:13.088959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.590 [2024-07-25 14:06:13.191900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.970 test_start 00:04:44.970 test_end 00:04:44.970 Performance: 447164 events per second 00:04:44.970 00:04:44.970 real 0m1.284s 00:04:44.970 user 0m1.196s 00:04:44.970 sys 0m0.082s 00:04:44.970 14:06:14 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.970 14:06:14 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:44.970 ************************************ 00:04:44.970 END TEST event_reactor_perf 00:04:44.970 ************************************ 00:04:44.970 14:06:14 event -- common/autotest_common.sh@1142 -- # return 0 00:04:44.970 14:06:14 event -- event/event.sh@49 -- # uname -s 00:04:44.970 14:06:14 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:44.970 14:06:14 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.970 14:06:14 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.970 14:06:14 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.970 14:06:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.970 ************************************ 00:04:44.970 START TEST event_scheduler 00:04:44.970 ************************************ 00:04:44.970 14:06:14 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:44.970 * Looking for test storage... 00:04:44.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:44.970 14:06:14 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:44.970 14:06:14 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=797807 00:04:44.970 14:06:14 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:44.970 14:06:14 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.970 14:06:14 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 797807 00:04:44.970 14:06:14 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 797807 ']' 00:04:44.970 14:06:14 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.970 14:06:14 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:44.970 14:06:14 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.970 14:06:14 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:44.970 14:06:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:44.970 [2024-07-25 14:06:14.450449] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:04:44.970 [2024-07-25 14:06:14.450522] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid797807 ] 00:04:44.970 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.970 [2024-07-25 14:06:14.509360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.970 [2024-07-25 14:06:14.617833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.970 [2024-07-25 14:06:14.621080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.970 [2024-07-25 14:06:14.621113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.970 [2024-07-25 14:06:14.621131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:45.229 14:06:14 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 [2024-07-25 14:06:14.653827] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:45.229 [2024-07-25 14:06:14.653852] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:45.229 [2024-07-25 14:06:14.653884] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:45.229 [2024-07-25 14:06:14.653895] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:45.229 [2024-07-25 14:06:14.653906] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 [2024-07-25 14:06:14.757616] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 ************************************ 00:04:45.229 START TEST scheduler_create_thread 00:04:45.229 ************************************ 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 2 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 3 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 4 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 5 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 6 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 7 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 8 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 9 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 10 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.229 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.489 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.489 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:45.489 14:06:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:45.489 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:45.489 14:06:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.748 14:06:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:45.748 00:04:45.748 real 0m0.589s 00:04:45.748 user 0m0.014s 00:04:45.748 sys 0m0.001s 00:04:45.748 14:06:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:45.748 14:06:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.748 ************************************ 00:04:45.748 END TEST scheduler_create_thread 00:04:45.748 ************************************ 00:04:45.748 14:06:15 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:04:45.748 14:06:15 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:45.748 14:06:15 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 797807 00:04:45.748 14:06:15 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 797807 ']' 00:04:45.748 14:06:15 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 797807 00:04:45.748 14:06:15 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:46.007 14:06:15 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:46.007 14:06:15 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 797807 00:04:46.007 14:06:15 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:46.007 14:06:15 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:46.007 14:06:15 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 797807' 00:04:46.007 killing process with pid 797807 00:04:46.007 14:06:15 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 797807 00:04:46.007 14:06:15 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 797807 00:04:46.266 [2024-07-25 14:06:15.853692] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:46.525 00:04:46.525 real 0m1.751s 00:04:46.525 user 0m2.162s 00:04:46.525 sys 0m0.321s 00:04:46.525 14:06:16 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.525 14:06:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:46.525 ************************************ 00:04:46.525 END TEST event_scheduler 00:04:46.525 ************************************ 00:04:46.525 14:06:16 event -- common/autotest_common.sh@1142 -- # return 0 00:04:46.525 14:06:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:46.525 14:06:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:46.525 14:06:16 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.525 14:06:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.525 14:06:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.525 ************************************ 00:04:46.525 START TEST app_repeat 00:04:46.525 ************************************ 00:04:46.525 14:06:16 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=798121 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 798121' 00:04:46.525 Process app_repeat pid: 798121 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:46.525 spdk_app_start Round 0 00:04:46.525 14:06:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 798121 /var/tmp/spdk-nbd.sock 00:04:46.525 14:06:16 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 798121 ']' 00:04:46.525 14:06:16 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:46.525 14:06:16 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.525 14:06:16 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:46.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:46.525 14:06:16 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.525 14:06:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:46.785 [2024-07-25 14:06:16.178524] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:04:46.785 [2024-07-25 14:06:16.178597] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid798121 ] 00:04:46.785 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.785 [2024-07-25 14:06:16.235936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.785 [2024-07-25 14:06:16.348518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.785 [2024-07-25 14:06:16.348521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.044 14:06:16 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.044 14:06:16 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:47.044 14:06:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.044 Malloc0 00:04:47.302 14:06:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.561 Malloc1 00:04:47.561 14:06:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.561 14:06:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.820 /dev/nbd0 00:04:47.820 14:06:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.820 14:06:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:47.820 14:06:17 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.820 1+0 records in 00:04:47.820 1+0 records out 00:04:47.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173691 s, 23.6 MB/s 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:47.820 14:06:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:47.820 14:06:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.820 14:06:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.820 14:06:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.079 /dev/nbd1 00:04:48.079 14:06:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.079 14:06:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.079 1+0 records in 00:04:48.079 1+0 records out 00:04:48.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231301 s, 17.7 MB/s 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:48.079 14:06:17 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:48.079 14:06:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.079 14:06:17 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.079 14:06:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.079 14:06:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.079 14:06:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.338 { 00:04:48.338 "nbd_device": "/dev/nbd0", 00:04:48.338 "bdev_name": "Malloc0" 00:04:48.338 }, 00:04:48.338 { 00:04:48.338 "nbd_device": "/dev/nbd1", 00:04:48.338 "bdev_name": "Malloc1" 00:04:48.338 } 00:04:48.338 ]' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.338 { 00:04:48.338 "nbd_device": "/dev/nbd0", 00:04:48.338 "bdev_name": "Malloc0" 00:04:48.338 }, 00:04:48.338 { 00:04:48.338 "nbd_device": "/dev/nbd1", 00:04:48.338 "bdev_name": "Malloc1" 00:04:48.338 } 00:04:48.338 ]' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.338 /dev/nbd1' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.338 /dev/nbd1' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.338 256+0 records in 00:04:48.338 256+0 records out 00:04:48.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490455 s, 214 MB/s 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.338 256+0 records in 00:04:48.338 256+0 records out 00:04:48.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227456 s, 46.1 MB/s 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.338 256+0 records in 00:04:48.338 256+0 records out 00:04:48.338 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0239006 s, 43.9 MB/s 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.338 14:06:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.598 14:06:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.891 14:06:18 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.891 14:06:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.149 14:06:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.149 14:06:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.408 14:06:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.669 [2024-07-25 14:06:19.272692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.928 [2024-07-25 14:06:19.374839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.928 [2024-07-25 14:06:19.374839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.928 [2024-07-25 14:06:19.428432] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.928 [2024-07-25 14:06:19.428493] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.463 14:06:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.463 14:06:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:52.463 spdk_app_start Round 1 00:04:52.463 14:06:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 798121 /var/tmp/spdk-nbd.sock 00:04:52.463 14:06:22 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 798121 ']' 00:04:52.463 14:06:22 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.463 14:06:22 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:52.463 14:06:22 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
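Each app_repeat round that follows repeats the NBD data-verify pattern already seen in Round 0. A condensed sketch of one pass, with the Jenkins workspace paths shortened and assuming rpc.py is pointed at the /var/tmp/spdk-nbd.sock socket used by the app:

# Sketch of one nbd_rpc_data_verify pass as driven by nbd_common.sh; not the literal helper code.
rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096                        # 64 MB malloc bdev with 4 KiB blocks -> Malloc0
$rpc nbd_start_disk Malloc0 /dev/nbd0                  # expose the bdev as an NBD block device
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random reference data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                     # the device must read back exactly what was written
rm nbdrandtest
$rpc nbd_stop_disk /dev/nbd0

The trace runs this for both Malloc0/Malloc1 on /dev/nbd0 and /dev/nbd1, which is why every dd and cmp appears twice per round.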
00:04:52.463 14:06:22 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:52.463 14:06:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.721 14:06:22 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.721 14:06:22 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:52.721 14:06:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.979 Malloc0 00:04:52.979 14:06:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.238 Malloc1 00:04:53.238 14:06:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.238 14:06:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:53.500 /dev/nbd0 00:04:53.500 14:06:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:53.500 14:06:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:53.500 1+0 records in 00:04:53.500 1+0 records out 00:04:53.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187454 s, 21.9 MB/s 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.500 14:06:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.500 14:06:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.500 14:06:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.500 14:06:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.761 /dev/nbd1 00:04:53.761 14:06:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.761 14:06:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.761 1+0 records in 00:04:53.761 1+0 records out 00:04:53.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165088 s, 24.8 MB/s 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:53.761 14:06:23 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:53.761 14:06:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.761 14:06:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.761 14:06:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.761 14:06:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.761 14:06:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:54.018 { 00:04:54.018 "nbd_device": "/dev/nbd0", 00:04:54.018 "bdev_name": "Malloc0" 00:04:54.018 }, 00:04:54.018 { 00:04:54.018 "nbd_device": "/dev/nbd1", 00:04:54.018 "bdev_name": "Malloc1" 00:04:54.018 } 00:04:54.018 ]' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.018 { 00:04:54.018 "nbd_device": "/dev/nbd0", 00:04:54.018 "bdev_name": "Malloc0" 00:04:54.018 }, 00:04:54.018 { 00:04:54.018 "nbd_device": "/dev/nbd1", 00:04:54.018 "bdev_name": "Malloc1" 00:04:54.018 } 00:04:54.018 ]' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.018 /dev/nbd1' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.018 /dev/nbd1' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.018 256+0 records in 00:04:54.018 256+0 records out 00:04:54.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496852 s, 211 MB/s 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:54.018 256+0 records in 00:04:54.018 256+0 records out 00:04:54.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209894 s, 50.0 MB/s 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:54.018 256+0 records in 00:04:54.018 256+0 records out 00:04:54.018 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243404 s, 43.1 MB/s 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.018 14:06:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.274 14:06:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:54.532 14:06:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:54.788 14:06:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.789 14:06:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.046 14:06:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.046 14:06:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:55.305 14:06:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:55.563 [2024-07-25 14:06:25.045012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.563 [2024-07-25 14:06:25.151183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.563 [2024-07-25 14:06:25.151187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.563 [2024-07-25 14:06:25.207298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:55.563 [2024-07-25 14:06:25.207362] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:58.847 14:06:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:58.847 14:06:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:58.847 spdk_app_start Round 2 00:04:58.847 14:06:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 798121 /var/tmp/spdk-nbd.sock 00:04:58.847 14:06:27 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 798121 ']' 00:04:58.847 14:06:27 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:58.847 14:06:27 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.847 14:06:27 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:58.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
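The waitfornbd helper that precedes every dd in these rounds first polls /proc/partitions and then proves the device answers reads. A rough reconstruction from the xtrace output (the pause between retries is an assumption; it is not visible in the trace):

# Hedged reconstruction of waitfornbd's behaviour; the real helper lives in autotest_common.sh.
waitfornbd() {
    local nbd_name=$1 size i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break    # wait for the kernel to publish the device
        sleep 0.1                                           # assumed back-off between polls
    done
    for ((i = 1; i <= 20; i++)); do
        dd if=/dev/$nbd_name of=nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s nbdtest)
        rm -f nbdtest
        [ "$size" != 0 ] && return 0                        # a non-empty direct read means the device is live
        sleep 0.1                                           # assumed
    done
    return 1
}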
00:04:58.847 14:06:27 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.847 14:06:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:58.847 14:06:28 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.847 14:06:28 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:58.847 14:06:28 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.847 Malloc0 00:04:58.847 14:06:28 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:59.107 Malloc1 00:04:59.107 14:06:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.107 14:06:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:59.365 /dev/nbd0 00:04:59.365 14:06:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:59.365 14:06:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:59.365 1+0 records in 00:04:59.365 1+0 records out 00:04:59.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019322 s, 21.2 MB/s 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.365 14:06:28 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.365 14:06:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.365 14:06:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.365 14:06:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:59.623 /dev/nbd1 00:04:59.623 14:06:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:59.623 14:06:29 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:59.623 1+0 records in 00:04:59.623 1+0 records out 00:04:59.623 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187604 s, 21.8 MB/s 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:59.623 14:06:29 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:59.623 14:06:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:59.623 14:06:29 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:59.623 14:06:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.623 14:06:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.623 14:06:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.880 14:06:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:59.881 { 00:04:59.881 "nbd_device": "/dev/nbd0", 00:04:59.881 "bdev_name": "Malloc0" 00:04:59.881 }, 00:04:59.881 { 00:04:59.881 "nbd_device": "/dev/nbd1", 00:04:59.881 "bdev_name": "Malloc1" 00:04:59.881 } 00:04:59.881 ]' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.881 { 00:04:59.881 "nbd_device": "/dev/nbd0", 00:04:59.881 "bdev_name": "Malloc0" 00:04:59.881 }, 00:04:59.881 { 00:04:59.881 "nbd_device": "/dev/nbd1", 00:04:59.881 "bdev_name": "Malloc1" 00:04:59.881 } 00:04:59.881 ]' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.881 /dev/nbd1' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.881 /dev/nbd1' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.881 256+0 records in 00:04:59.881 256+0 records out 00:04:59.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00492354 s, 213 MB/s 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.881 256+0 records in 00:04:59.881 256+0 records out 00:04:59.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240237 s, 43.6 MB/s 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.881 256+0 records in 00:04:59.881 256+0 records out 00:04:59.881 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245062 s, 42.8 MB/s 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.881 14:06:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:00.139 14:06:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:00.397 14:06:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.654 14:06:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.654 14:06:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.654 14:06:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.912 14:06:30 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.912 14:06:30 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:01.171 14:06:30 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:01.430 [2024-07-25 14:06:30.872288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.430 [2024-07-25 14:06:30.976726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.430 [2024-07-25 14:06:30.976727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.430 [2024-07-25 14:06:31.034260] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.430 [2024-07-25 14:06:31.034348] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.987 14:06:33 event.app_repeat -- event/event.sh@38 -- # waitforlisten 798121 /var/tmp/spdk-nbd.sock 00:05:03.987 14:06:33 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 798121 ']' 00:05:03.987 14:06:33 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.987 14:06:33 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.987 14:06:33 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:03.987 14:06:33 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.987 14:06:33 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:04.244 14:06:33 event.app_repeat -- event/event.sh@39 -- # killprocess 798121 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 798121 ']' 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 798121 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 798121 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 798121' 00:05:04.244 killing process with pid 798121 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@967 -- # kill 798121 00:05:04.244 14:06:33 event.app_repeat -- common/autotest_common.sh@972 -- # wait 798121 00:05:04.503 spdk_app_start is called in Round 0. 00:05:04.503 Shutdown signal received, stop current app iteration 00:05:04.503 Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 reinitialization... 00:05:04.503 spdk_app_start is called in Round 1. 00:05:04.503 Shutdown signal received, stop current app iteration 00:05:04.503 Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 reinitialization... 00:05:04.503 spdk_app_start is called in Round 2. 00:05:04.503 Shutdown signal received, stop current app iteration 00:05:04.503 Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 reinitialization... 00:05:04.503 spdk_app_start is called in Round 3. 
00:05:04.503 Shutdown signal received, stop current app iteration 00:05:04.503 14:06:34 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:04.503 14:06:34 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:04.503 00:05:04.503 real 0m17.977s 00:05:04.503 user 0m39.032s 00:05:04.503 sys 0m3.216s 00:05:04.503 14:06:34 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.503 14:06:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:04.503 ************************************ 00:05:04.503 END TEST app_repeat 00:05:04.503 ************************************ 00:05:04.762 14:06:34 event -- common/autotest_common.sh@1142 -- # return 0 00:05:04.762 14:06:34 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:04.762 14:06:34 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:04.762 14:06:34 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.762 14:06:34 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.762 14:06:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.762 ************************************ 00:05:04.762 START TEST cpu_locks 00:05:04.762 ************************************ 00:05:04.762 14:06:34 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:04.762 * Looking for test storage... 00:05:04.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:04.762 14:06:34 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:04.762 14:06:34 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:04.762 14:06:34 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:04.762 14:06:34 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:04.762 14:06:34 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.762 14:06:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.762 14:06:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.762 ************************************ 00:05:04.762 START TEST default_locks 00:05:04.762 ************************************ 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=800468 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 800468 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 800468 ']' 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:04.762 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:04.762 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.762 [2024-07-25 14:06:34.324755] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:04.762 [2024-07-25 14:06:34.324832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800468 ] 00:05:04.762 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.762 [2024-07-25 14:06:34.382854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.021 [2024-07-25 14:06:34.487622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.280 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.280 14:06:34 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:05.280 14:06:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 800468 00:05:05.280 14:06:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 800468 00:05:05.280 14:06:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.538 lslocks: write error 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 800468 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 800468 ']' 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 800468 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 800468 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 800468' 00:05:05.538 killing process with pid 800468 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 800468 00:05:05.538 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 800468 00:05:06.103 14:06:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 800468 00:05:06.103 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:06.103 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 800468 00:05:06.103 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:06.103 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 800468 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 800468 ']' 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (800468) - No such process 00:05:06.104 ERROR: process (pid: 800468) is no longer running 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:06.104 00:05:06.104 real 0m1.215s 00:05:06.104 user 0m1.178s 00:05:06.104 sys 0m0.511s 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.104 14:06:35 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.104 ************************************ 00:05:06.104 END TEST default_locks 00:05:06.104 ************************************ 00:05:06.104 14:06:35 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:06.104 14:06:35 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:06.104 14:06:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.104 14:06:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.104 14:06:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.104 ************************************ 00:05:06.104 START TEST default_locks_via_rpc 00:05:06.104 ************************************ 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=800636 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.104 14:06:35 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 800636 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 800636 ']' 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.104 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.104 [2024-07-25 14:06:35.582428] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:06.104 [2024-07-25 14:06:35.582519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800636 ] 00:05:06.104 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.104 [2024-07-25 14:06:35.638898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.104 [2024-07-25 14:06:35.738253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 800636 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 800636 00:05:06.362 14:06:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.930 14:06:36 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 800636 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 800636 ']' 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 800636 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 800636 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 800636' 00:05:06.930 killing process with pid 800636 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 800636 00:05:06.930 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 800636 00:05:07.193 00:05:07.193 real 0m1.203s 00:05:07.193 user 0m1.138s 00:05:07.193 sys 0m0.502s 00:05:07.193 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.193 14:06:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.193 ************************************ 00:05:07.193 END TEST default_locks_via_rpc 00:05:07.193 ************************************ 00:05:07.193 14:06:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:07.193 14:06:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:07.193 14:06:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.193 14:06:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.193 14:06:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:07.193 ************************************ 00:05:07.193 START TEST non_locking_app_on_locked_coremask 00:05:07.193 ************************************ 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=800797 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 800797 /var/tmp/spdk.sock 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 800797 ']' 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.193 14:06:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.193 [2024-07-25 14:06:36.835880] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:07.193 [2024-07-25 14:06:36.835984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800797 ] 00:05:07.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.468 [2024-07-25 14:06:36.894623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.468 [2024-07-25 14:06:37.004975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=800881 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 800881 /var/tmp/spdk2.sock 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 800881 ']' 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.737 14:06:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.737 [2024-07-25 14:06:37.289263] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:07.737 [2024-07-25 14:06:37.289348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid800881 ] 00:05:07.737 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.737 [2024-07-25 14:06:37.371923] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
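The two launches above are the whole point of non_locking_app_on_locked_coremask: the first target claims the core 0 lock, and the second is still allowed onto the same core because it opts out of locking, which is why the trace prints 'CPU core locks deactivated.' for it. Reduced to the commands seen in the log (workspace prefix dropped, backgrounding and the waitforlisten calls implied):

  build/bin/spdk_tgt -m 0x1                                                  # pid 800797: takes the core 0 lock file
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock   # pid 800881: same core, no lock taken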
00:05:07.737 [2024-07-25 14:06:37.371948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.997 [2024-07-25 14:06:37.580016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.932 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.932 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:08.932 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 800797 00:05:08.932 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 800797 00:05:08.932 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.191 lslocks: write error 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 800797 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 800797 ']' 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 800797 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 800797 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 800797' 00:05:09.191 killing process with pid 800797 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 800797 00:05:09.191 14:06:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 800797 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 800881 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 800881 ']' 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 800881 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 800881 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 800881' 00:05:10.129 killing 
process with pid 800881 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 800881 00:05:10.129 14:06:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 800881 00:05:10.697 00:05:10.697 real 0m3.259s 00:05:10.697 user 0m3.413s 00:05:10.697 sys 0m1.026s 00:05:10.697 14:06:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.697 14:06:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.697 ************************************ 00:05:10.697 END TEST non_locking_app_on_locked_coremask 00:05:10.697 ************************************ 00:05:10.697 14:06:40 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:10.697 14:06:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:10.697 14:06:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.697 14:06:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.697 14:06:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.697 ************************************ 00:05:10.697 START TEST locking_app_on_unlocked_coremask 00:05:10.697 ************************************ 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=801238 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 801238 /var/tmp/spdk.sock 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 801238 ']' 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.697 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.697 [2024-07-25 14:06:40.140049] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:10.697 [2024-07-25 14:06:40.140183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801238 ] 00:05:10.697 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.697 [2024-07-25 14:06:40.201300] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:10.697 [2024-07-25 14:06:40.201340] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.697 [2024-07-25 14:06:40.311543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=801250 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 801250 /var/tmp/spdk2.sock 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 801250 ']' 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.956 14:06:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.956 [2024-07-25 14:06:40.602827] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:05:10.956 [2024-07-25 14:06:40.602909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801250 ] 00:05:11.214 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.214 [2024-07-25 14:06:40.687582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.473 [2024-07-25 14:06:40.896197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.039 14:06:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.039 14:06:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:12.039 14:06:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 801250 00:05:12.039 14:06:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 801250 00:05:12.039 14:06:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.606 lslocks: write error 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 801238 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 801238 ']' 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 801238 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 801238 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 801238' 00:05:12.606 killing process with pid 801238 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 801238 00:05:12.606 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 801238 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 801250 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 801250 ']' 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 801250 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 801250 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 801250' 00:05:13.545 killing process with pid 801250 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 801250 00:05:13.545 14:06:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 801250 00:05:13.807 00:05:13.807 real 0m3.278s 00:05:13.807 user 0m3.426s 00:05:13.807 sys 0m1.019s 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.807 ************************************ 00:05:13.807 END TEST locking_app_on_unlocked_coremask 00:05:13.807 ************************************ 00:05:13.807 14:06:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:13.807 14:06:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:13.807 14:06:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.807 14:06:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.807 14:06:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.807 ************************************ 00:05:13.807 START TEST locking_app_on_locked_coremask 00:05:13.807 ************************************ 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=801669 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 801669 /var/tmp/spdk.sock 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 801669 ']' 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.807 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.069 [2024-07-25 14:06:43.475024] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
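locking_app_on_unlocked_coremask, which finishes above, is the mirror case: the first target (801238) starts with --disable-cpumask-locks, so the second, normally locked target (801250) is the one that ends up holding the core 0 lock, and that is what the suite's locks_exist check confirms. The check itself is just a pipeline over lslocks; the recurring 'lslocks: write error' is most likely lslocks hitting a closed pipe once grep -q exits on the first match, not a test failure:

  # locks_exist <pid>, in essence: does the process hold an advisory lock
  # on one of the /var/tmp/spdk_cpu_lock_* files (one file per claimed core)?
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds a core lock"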
00:05:14.069 [2024-07-25 14:06:43.475155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801669 ] 00:05:14.069 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.069 [2024-07-25 14:06:43.532087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.069 [2024-07-25 14:06:43.642671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=801678 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 801678 /var/tmp/spdk2.sock 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 801678 /var/tmp/spdk2.sock 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 801678 /var/tmp/spdk2.sock 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 801678 ']' 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.329 14:06:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.329 [2024-07-25 14:06:43.930944] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:05:14.329 [2024-07-25 14:06:43.931036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801678 ] 00:05:14.329 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.589 [2024-07-25 14:06:44.016483] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 801669 has claimed it. 00:05:14.589 [2024-07-25 14:06:44.016538] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:15.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (801678) - No such process 00:05:15.158 ERROR: process (pid: 801678) is no longer running 00:05:15.158 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.158 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:15.158 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:15.158 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.158 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.158 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.159 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 801669 00:05:15.159 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 801669 00:05:15.159 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.417 lslocks: write error 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 801669 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 801669 ']' 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 801669 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 801669 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 801669' 00:05:15.417 killing process with pid 801669 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 801669 00:05:15.417 14:06:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 801669 00:05:15.984 00:05:15.984 real 0m1.974s 00:05:15.984 user 0m2.153s 00:05:15.984 sys 0m0.601s 00:05:15.984 14:06:45 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.984 14:06:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.984 ************************************ 00:05:15.984 END TEST locking_app_on_locked_coremask 00:05:15.984 ************************************ 00:05:15.984 14:06:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:15.984 14:06:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:15.984 14:06:45 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.984 14:06:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.984 14:06:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.984 ************************************ 00:05:15.984 START TEST locking_overlapped_coremask 00:05:15.984 ************************************ 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=801965 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 801965 /var/tmp/spdk.sock 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 801965 ']' 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.984 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.984 [2024-07-25 14:06:45.498985] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:05:15.984 [2024-07-25 14:06:45.499097] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801965 ] 00:05:15.984 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.984 [2024-07-25 14:06:45.555744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.245 [2024-07-25 14:06:45.668070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.245 [2024-07-25 14:06:45.668127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.245 [2024-07-25 14:06:45.668131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=801976 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 801976 /var/tmp/spdk2.sock 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 801976 /var/tmp/spdk2.sock 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 801976 /var/tmp/spdk2.sock 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 801976 ']' 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.503 14:06:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.503 [2024-07-25 14:06:45.972619] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
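For locking_overlapped_coremask the first target was started with -m 0x7 (cores 0 to 2) and the second is being started with -m 0x1c (cores 2 to 4), so the masks overlap on core 2 and the NOT waitforlisten wrapper expects this second launch to die, as the records that follow show. The overlap as a one-liner:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2: core 2 is contested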
00:05:16.503 [2024-07-25 14:06:45.972701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid801976 ] 00:05:16.503 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.503 [2024-07-25 14:06:46.059559] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 801965 has claimed it. 00:05:16.503 [2024-07-25 14:06:46.059619] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:17.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (801976) - No such process 00:05:17.072 ERROR: process (pid: 801976) is no longer running 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 801965 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 801965 ']' 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 801965 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 801965 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 801965' 00:05:17.072 killing process with pid 801965 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 801965 00:05:17.072 14:06:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 801965 00:05:17.638 00:05:17.638 real 0m1.686s 00:05:17.638 user 0m4.467s 00:05:17.638 sys 0m0.462s 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:17.638 ************************************ 00:05:17.638 END TEST locking_overlapped_coremask 00:05:17.638 ************************************ 00:05:17.638 14:06:47 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:17.638 14:06:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:17.638 14:06:47 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.638 14:06:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.638 14:06:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.638 ************************************ 00:05:17.638 START TEST locking_overlapped_coremask_via_rpc 00:05:17.638 ************************************ 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=802146 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 802146 /var/tmp/spdk.sock 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 802146 ']' 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:17.638 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.638 [2024-07-25 14:06:47.234279] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:17.638 [2024-07-25 14:06:47.234360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802146 ] 00:05:17.638 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.638 [2024-07-25 14:06:47.290175] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:17.638 [2024-07-25 14:06:47.290222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.898 [2024-07-25 14:06:47.402250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.898 [2024-07-25 14:06:47.402308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.898 [2024-07-25 14:06:47.402311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=802271 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 802271 /var/tmp/spdk2.sock 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 802271 ']' 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:18.156 14:06:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.156 [2024-07-25 14:06:47.699861] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:18.156 [2024-07-25 14:06:47.699942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802271 ] 00:05:18.156 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.156 [2024-07-25 14:06:47.788580] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:18.156 [2024-07-25 14:06:47.788619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:18.414 [2024-07-25 14:06:48.011802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.414 [2024-07-25 14:06:48.011866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:18.414 [2024-07-25 14:06:48.011868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.352 [2024-07-25 14:06:48.671168] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 802146 has claimed it. 
00:05:19.352 request: 00:05:19.352 { 00:05:19.352 "method": "framework_enable_cpumask_locks", 00:05:19.352 "req_id": 1 00:05:19.352 } 00:05:19.352 Got JSON-RPC error response 00:05:19.352 response: 00:05:19.352 { 00:05:19.352 "code": -32603, 00:05:19.352 "message": "Failed to claim CPU core: 2" 00:05:19.352 } 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 802146 /var/tmp/spdk.sock 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 802146 ']' 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 802271 /var/tmp/spdk2.sock 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 802271 ']' 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:19.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
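Editor's note: roughly what the two rpc_cmd calls in the trace amount to, written against SPDK's standalone JSON-RPC client (the scripts/rpc.py spelling is an assumption here; the test drives the same framework_enable_cpumask_locks method through its rpc_cmd helper):

    # On the default socket the first target claims cores 0-2 and succeeds
    scripts/rpc.py framework_enable_cpumask_locks
    # On the second target's socket the same call returns the -32603 error above,
    # because core 2 is already locked by the first process
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks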
00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.352 14:06:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.610 14:06:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:19.610 14:06:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:19.610 14:06:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:19.610 14:06:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:19.610 14:06:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:19.610 14:06:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:19.610 00:05:19.610 real 0m1.997s 00:05:19.610 user 0m1.016s 00:05:19.610 sys 0m0.198s 00:05:19.610 14:06:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.610 14:06:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.610 ************************************ 00:05:19.610 END TEST locking_overlapped_coremask_via_rpc 00:05:19.610 ************************************ 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:19.610 14:06:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:19.610 14:06:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 802146 ]] 00:05:19.610 14:06:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 802146 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 802146 ']' 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 802146 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 802146 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 802146' 00:05:19.610 killing process with pid 802146 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 802146 00:05:19.610 14:06:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 802146 00:05:20.179 14:06:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 802271 ]] 00:05:20.179 14:06:49 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 802271 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 802271 ']' 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 802271 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 802271 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 802271' 00:05:20.179 killing process with pid 802271 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 802271 00:05:20.179 14:06:49 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 802271 00:05:20.749 14:06:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:20.749 14:06:50 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:20.749 14:06:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 802146 ]] 00:05:20.749 14:06:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 802146 00:05:20.749 14:06:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 802146 ']' 00:05:20.749 14:06:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 802146 00:05:20.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (802146) - No such process 00:05:20.749 14:06:50 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 802146 is not found' 00:05:20.749 Process with pid 802146 is not found 00:05:20.749 14:06:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 802271 ]] 00:05:20.749 14:06:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 802271 00:05:20.749 14:06:50 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 802271 ']' 00:05:20.749 14:06:50 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 802271 00:05:20.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (802271) - No such process 00:05:20.749 14:06:50 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 802271 is not found' 00:05:20.749 Process with pid 802271 is not found 00:05:20.749 14:06:50 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:20.749 00:05:20.749 real 0m15.961s 00:05:20.749 user 0m27.758s 00:05:20.749 sys 0m5.214s 00:05:20.749 14:06:50 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.749 14:06:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.749 ************************************ 00:05:20.749 END TEST cpu_locks 00:05:20.749 ************************************ 00:05:20.749 14:06:50 event -- common/autotest_common.sh@1142 -- # return 0 00:05:20.749 00:05:20.749 real 0m39.911s 00:05:20.749 user 1m15.706s 00:05:20.749 sys 0m9.222s 00:05:20.749 14:06:50 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.749 14:06:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.749 ************************************ 00:05:20.749 END TEST event 00:05:20.749 ************************************ 00:05:20.749 14:06:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:20.749 14:06:50 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:20.749 14:06:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.749 14:06:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.749 14:06:50 -- 
common/autotest_common.sh@10 -- # set +x 00:05:20.749 ************************************ 00:05:20.749 START TEST thread 00:05:20.749 ************************************ 00:05:20.749 14:06:50 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:20.749 * Looking for test storage... 00:05:20.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:20.749 14:06:50 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:20.749 14:06:50 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:20.749 14:06:50 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.749 14:06:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.749 ************************************ 00:05:20.749 START TEST thread_poller_perf 00:05:20.749 ************************************ 00:05:20.749 14:06:50 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:20.749 [2024-07-25 14:06:50.306301] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:20.749 [2024-07-25 14:06:50.306372] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802641 ] 00:05:20.749 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.749 [2024-07-25 14:06:50.365616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.008 [2024-07-25 14:06:50.474936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.008 Running 1000 pollers for 1 seconds with 1 microseconds period. 
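Editor's note: the poller_perf flags are not spelled out in the trace, but the banner it prints maps onto them one-to-one; the reading below is inferred from that output and from SPDK's usual poller semantics, not from the tool's help text:

    # "Running 1000 pollers for 1 seconds with 1 microseconds period"
    poller_perf -b 1000 -l 1 -t 1    # -b pollers registered, -l period in usec, -t run time in seconds
    poller_perf -b 1000 -l 0 -t 1    # the second sub-test: period 0, i.e. pollers run on every reactor iteration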
00:05:22.071 ====================================== 00:05:22.071 busy:2713095393 (cyc) 00:05:22.071 total_run_count: 367000 00:05:22.071 tsc_hz: 2700000000 (cyc) 00:05:22.071 ====================================== 00:05:22.071 poller_cost: 7392 (cyc), 2737 (nsec) 00:05:22.071 00:05:22.071 real 0m1.300s 00:05:22.071 user 0m1.213s 00:05:22.071 sys 0m0.081s 00:05:22.071 14:06:51 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.071 14:06:51 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.071 ************************************ 00:05:22.071 END TEST thread_poller_perf 00:05:22.071 ************************************ 00:05:22.331 14:06:51 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:22.331 14:06:51 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:22.331 14:06:51 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:22.331 14:06:51 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.331 14:06:51 thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.331 ************************************ 00:05:22.331 START TEST thread_poller_perf 00:05:22.331 ************************************ 00:05:22.331 14:06:51 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:22.331 [2024-07-25 14:06:51.656039] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:22.331 [2024-07-25 14:06:51.656124] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid802798 ] 00:05:22.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.331 [2024-07-25 14:06:51.713417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.331 [2024-07-25 14:06:51.817162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.331 Running 1000 pollers for 1 seconds with 0 microseconds period. 
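Editor's note: the summary block above appears to be plain arithmetic over the run: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure is that cost scaled by the reported 2.7 GHz TSC. Reproducing it from the logged numbers:

    echo $(( 2713095393 / 367000 ))   # 7392  -> poller_cost in cycles
    echo $(( 7392 * 1000 / 2700 ))    # 2737  -> poller_cost in nsec at tsc_hz 2700000000

The second run further down (0 microsecond period) works out the same way: 2702556033 / 4859000 gives about 556 cycles, roughly 205 ns per poller execution.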
00:05:23.709 ====================================== 00:05:23.709 busy:2702556033 (cyc) 00:05:23.709 total_run_count: 4859000 00:05:23.709 tsc_hz: 2700000000 (cyc) 00:05:23.709 ====================================== 00:05:23.709 poller_cost: 556 (cyc), 205 (nsec) 00:05:23.709 00:05:23.709 real 0m1.285s 00:05:23.709 user 0m1.208s 00:05:23.709 sys 0m0.071s 00:05:23.709 14:06:52 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.709 14:06:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.709 ************************************ 00:05:23.709 END TEST thread_poller_perf 00:05:23.709 ************************************ 00:05:23.709 14:06:52 thread -- common/autotest_common.sh@1142 -- # return 0 00:05:23.709 14:06:52 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:23.709 00:05:23.709 real 0m2.738s 00:05:23.709 user 0m2.476s 00:05:23.709 sys 0m0.262s 00:05:23.709 14:06:52 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.709 14:06:52 thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.709 ************************************ 00:05:23.709 END TEST thread 00:05:23.709 ************************************ 00:05:23.709 14:06:52 -- common/autotest_common.sh@1142 -- # return 0 00:05:23.709 14:06:52 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:23.709 14:06:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.709 14:06:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.709 14:06:52 -- common/autotest_common.sh@10 -- # set +x 00:05:23.709 ************************************ 00:05:23.709 START TEST accel 00:05:23.709 ************************************ 00:05:23.709 14:06:53 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:23.709 * Looking for test storage... 00:05:23.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:23.709 14:06:53 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:23.709 14:06:53 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:23.709 14:06:53 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:23.709 14:06:53 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=803106 00:05:23.709 14:06:53 accel -- accel/accel.sh@63 -- # waitforlisten 803106 00:05:23.709 14:06:53 accel -- common/autotest_common.sh@829 -- # '[' -z 803106 ']' 00:05:23.709 14:06:53 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:23.709 14:06:53 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.709 14:06:53 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:23.709 14:06:53 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.709 14:06:53 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:23.709 14:06:53 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.709 14:06:53 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:23.709 14:06:53 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.709 14:06:53 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:23.709 14:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.709 14:06:53 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:23.709 14:06:53 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:23.709 14:06:53 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:23.709 14:06:53 accel -- accel/accel.sh@41 -- # jq -r . 00:05:23.709 [2024-07-25 14:06:53.109371] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:23.709 [2024-07-25 14:06:53.109462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803106 ] 00:05:23.709 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.709 [2024-07-25 14:06:53.165960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.709 [2024-07-25 14:06:53.270961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@862 -- # return 0 00:05:23.975 14:06:53 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:23.975 14:06:53 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:23.975 14:06:53 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:23.975 14:06:53 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:23.975 14:06:53 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:23.975 14:06:53 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.975 14:06:53 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # IFS== 00:05:23.975 14:06:53 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:23.975 14:06:53 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:23.975 14:06:53 accel -- accel/accel.sh@75 -- # killprocess 803106 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@948 -- # '[' -z 803106 ']' 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@952 -- # kill -0 803106 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@953 -- # uname 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 803106 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 803106' 00:05:23.975 killing process with pid 803106 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@967 -- # kill 803106 00:05:23.975 14:06:53 accel -- common/autotest_common.sh@972 -- # wait 803106 00:05:24.575 14:06:54 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:24.575 14:06:54 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:24.575 14:06:54 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:24.575 14:06:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.575 14:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.575 14:06:54 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:24.575 14:06:54 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:05:24.575 14:06:54 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.575 14:06:54 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:24.575 14:06:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:24.575 14:06:54 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:24.575 14:06:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:24.575 14:06:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.575 14:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.575 ************************************ 00:05:24.575 START TEST accel_missing_filename 00:05:24.575 ************************************ 00:05:24.575 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:24.575 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:24.575 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:24.575 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:24.575 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.575 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:24.575 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:24.575 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:24.575 14:06:54 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:24.575 [2024-07-25 14:06:54.133820] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:24.575 [2024-07-25 14:06:54.133884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803204 ] 00:05:24.575 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.575 [2024-07-25 14:06:54.194431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.834 [2024-07-25 14:06:54.299966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.834 [2024-07-25 14:06:54.356854] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.834 [2024-07-25 14:06:54.434589] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:25.093 A filename is required. 
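Editor's note: accel_missing_filename is a deliberate failure case: compress is the one workload that needs an input file, so accel_perf aborts with "A filename is required." before the app starts. A hedged sketch of the failing call and its corrected form, with the binary path abbreviated (the bib file is the input the next sub-test passes with -l):

    accel_perf -t 1 -w compress                      # no -l <file>  -> "A filename is required."
    accel_perf -t 1 -w compress -l test/accel/bib    # same workload with an input file supplied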
00:05:25.093 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:25.093 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.093 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:25.093 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:25.093 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:25.093 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.093 00:05:25.093 real 0m0.425s 00:05:25.093 user 0m0.314s 00:05:25.093 sys 0m0.145s 00:05:25.093 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.093 14:06:54 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:25.093 ************************************ 00:05:25.093 END TEST accel_missing_filename 00:05:25.093 ************************************ 00:05:25.093 14:06:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.093 14:06:54 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.093 14:06:54 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:25.093 14:06:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.093 14:06:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.093 ************************************ 00:05:25.093 START TEST accel_compress_verify 00:05:25.093 ************************************ 00:05:25.093 14:06:54 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.093 14:06:54 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:25.093 14:06:54 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.093 14:06:54 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:25.093 14:06:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.093 14:06:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:25.093 14:06:54 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.093 14:06:54 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.093 14:06:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:25.093 14:06:54 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:25.093 14:06:54 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.093 14:06:54 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.093 14:06:54 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.093 14:06:54 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.093 14:06:54 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.093 14:06:54 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:25.093 14:06:54 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:25.093 [2024-07-25 14:06:54.606635] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:25.093 [2024-07-25 14:06:54.606700] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803308 ] 00:05:25.093 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.093 [2024-07-25 14:06:54.663631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.352 [2024-07-25 14:06:54.769100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.352 [2024-07-25 14:06:54.825145] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:25.352 [2024-07-25 14:06:54.906786] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:25.611 00:05:25.611 Compression does not support the verify option, aborting. 00:05:25.611 14:06:55 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:25.611 14:06:55 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.611 14:06:55 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:25.611 14:06:55 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:25.611 14:06:55 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:25.611 14:06:55 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.611 00:05:25.611 real 0m0.430s 00:05:25.611 user 0m0.326s 00:05:25.611 sys 0m0.137s 00:05:25.611 14:06:55 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.611 14:06:55 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:25.611 ************************************ 00:05:25.611 END TEST accel_compress_verify 00:05:25.611 ************************************ 00:05:25.611 14:06:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.611 14:06:55 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:25.611 14:06:55 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:25.611 14:06:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.611 14:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.611 ************************************ 00:05:25.611 START TEST accel_wrong_workload 00:05:25.611 ************************************ 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:25.611 14:06:55 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:25.611 14:06:55 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:25.611 Unsupported workload type: foobar 00:05:25.611 [2024-07-25 14:06:55.083698] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:25.611 accel_perf options: 00:05:25.611 [-h help message] 00:05:25.611 [-q queue depth per core] 00:05:25.611 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:25.611 [-T number of threads per core 00:05:25.611 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:25.611 [-t time in seconds] 00:05:25.611 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:25.611 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:25.611 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:25.611 [-l for compress/decompress workloads, name of uncompressed input file 00:05:25.611 [-S for crc32c workload, use this seed value (default 0) 00:05:25.611 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:25.611 [-f for fill workload, use this BYTE value (default 255) 00:05:25.611 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:25.611 [-y verify result if this switch is on] 00:05:25.611 [-a tasks to allocate per core (default: same value as -q)] 00:05:25.611 Can be used to spread operations across a wider range of memory. 
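Editor's note: the usage text above is printed because "foobar" is not in the accepted -w workload list. For contrast, a valid invocation built only from the options listed there, and the one the crc32c sub-test further down actually runs:

    accel_perf -t 1 -w crc32c -S 32 -y    # 1-second crc32c run, seed 32, verify results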
00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.611 00:05:25.611 real 0m0.024s 00:05:25.611 user 0m0.013s 00:05:25.611 sys 0m0.011s 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.611 14:06:55 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:25.611 ************************************ 00:05:25.611 END TEST accel_wrong_workload 00:05:25.611 ************************************ 00:05:25.611 Error: writing output failed: Broken pipe 00:05:25.611 14:06:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.611 14:06:55 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:25.611 14:06:55 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:25.611 14:06:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.611 14:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.611 ************************************ 00:05:25.611 START TEST accel_negative_buffers 00:05:25.611 ************************************ 00:05:25.611 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:25.611 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:25.611 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:25.611 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:25.611 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.611 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:25.611 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:25.611 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:25.611 14:06:55 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:25.611 -x option must be non-negative. 
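Editor's note: accel_negative_buffers follows the same pattern: -x -1 is rejected during argument parsing ("-x option must be non-negative"), so the app never starts and only the usage text is emitted. Per that usage text, the smallest valid xor source-buffer count is 2:

    accel_perf -t 1 -w xor -y -x 2    # minimum legal -x for the xor workload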
00:05:25.611 [2024-07-25 14:06:55.153215] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:25.611 accel_perf options: 00:05:25.612 [-h help message] 00:05:25.612 [-q queue depth per core] 00:05:25.612 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:25.612 [-T number of threads per core 00:05:25.612 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:25.612 [-t time in seconds] 00:05:25.612 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:25.612 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:25.612 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:25.612 [-l for compress/decompress workloads, name of uncompressed input file 00:05:25.612 [-S for crc32c workload, use this seed value (default 0) 00:05:25.612 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:25.612 [-f for fill workload, use this BYTE value (default 255) 00:05:25.612 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:25.612 [-y verify result if this switch is on] 00:05:25.612 [-a tasks to allocate per core (default: same value as -q)] 00:05:25.612 Can be used to spread operations across a wider range of memory. 00:05:25.612 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:25.612 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:25.612 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:25.612 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:25.612 00:05:25.612 real 0m0.022s 00:05:25.612 user 0m0.012s 00:05:25.612 sys 0m0.011s 00:05:25.612 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.612 14:06:55 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:25.612 ************************************ 00:05:25.612 END TEST accel_negative_buffers 00:05:25.612 ************************************ 00:05:25.612 Error: writing output failed: Broken pipe 00:05:25.612 14:06:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.612 14:06:55 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:25.612 14:06:55 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:25.612 14:06:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.612 14:06:55 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.612 ************************************ 00:05:25.612 START TEST accel_crc32c 00:05:25.612 ************************************ 00:05:25.612 14:06:55 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:25.612 14:06:55 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:25.612 [2024-07-25 14:06:55.217193] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:25.612 [2024-07-25 14:06:55.217251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803386 ] 00:05:25.612 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.872 [2024-07-25 14:06:55.274452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.872 [2024-07-25 14:06:55.377824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:25.872 14:06:55 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:27.252 14:06:56 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.252 00:05:27.252 real 0m1.432s 00:05:27.252 user 0m1.298s 00:05:27.252 sys 0m0.136s 00:05:27.252 14:06:56 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.252 14:06:56 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:27.252 ************************************ 00:05:27.252 END TEST accel_crc32c 00:05:27.252 ************************************ 00:05:27.252 14:06:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.252 14:06:56 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:27.252 14:06:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:27.252 14:06:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.252 14:06:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.252 ************************************ 00:05:27.252 START TEST accel_crc32c_C2 00:05:27.252 ************************************ 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.252 14:06:56 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:27.252 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:27.252 [2024-07-25 14:06:56.701702] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:27.252 [2024-07-25 14:06:56.701767] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803646 ] 00:05:27.252 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.252 [2024-07-25 14:06:56.760161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.252 [2024-07-25 14:06:56.870485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.512 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:05:27.513 14:06:56 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.896 00:05:28.896 real 0m1.431s 00:05:28.896 user 0m1.290s 00:05:28.896 sys 0m0.142s 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.896 14:06:58 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:28.896 ************************************ 00:05:28.896 END TEST accel_crc32c_C2 00:05:28.896 ************************************ 00:05:28.896 14:06:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:28.896 14:06:58 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:28.896 14:06:58 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:28.896 14:06:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.896 14:06:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:28.896 ************************************ 00:05:28.896 START TEST accel_copy 00:05:28.896 ************************************ 00:05:28.896 14:06:58 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
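Note: every sub-test in this block is launched the same way, run_test <name> accel_test <workload flags>, and accel_test hands accel_perf an accel JSON config over a process-substitution descriptor (the -c /dev/fd/62 argument in each trace) before forwarding the workload flags. A rough sketch of that shape, reconstructed from the xtrace rather than from the actual accel.sh source (function body, JSON schema and fd number are assumptions), looks like:

    # Hypothetical reconstruction, NOT the real accel.sh implementation; only the
    # overall flow is inferred from the trace (accel_json_cfg=(), local IFS=',',
    # jq -r ., -c /dev/fd/62). The JSON shape below is a placeholder.
    accel_test() {
        local accel_json_cfg=()   # per-module config fragments; empty for these software runs
        # ... option parsing that may append to accel_json_cfg elided ...
        local IFS=,
        ./build/examples/accel_perf \
            -c <(jq -r . <<< "{\"config\":[${accel_json_cfg[*]}]}") "$@"
    }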
00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:28.896 [2024-07-25 14:06:58.180002] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:28.896 [2024-07-25 14:06:58.180074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803802 ] 00:05:28.896 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.896 [2024-07-25 14:06:58.235778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.896 [2024-07-25 14:06:58.343271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:28.896 14:06:58 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 
14:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:30.274 14:06:59 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.274 00:05:30.274 real 0m1.433s 00:05:30.274 user 0m1.296s 00:05:30.274 sys 0m0.138s 00:05:30.274 14:06:59 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.274 14:06:59 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:30.274 ************************************ 00:05:30.274 END TEST accel_copy 00:05:30.274 ************************************ 00:05:30.274 14:06:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.274 14:06:59 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:30.274 14:06:59 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:30.274 14:06:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.274 14:06:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.274 ************************************ 00:05:30.274 START TEST accel_fill 00:05:30.274 ************************************ 00:05:30.274 14:06:59 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:30.274 [2024-07-25 14:06:59.659801] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:30.274 [2024-07-25 14:06:59.659866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid803966 ] 00:05:30.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.274 [2024-07-25 14:06:59.716266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.274 [2024-07-25 14:06:59.821032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.274 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
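Note: the "EAL: No free 2048 kB hugepages reported on node 1" notice accompanies every accel_perf start in this block. The runs proceed regardless, but when reproducing locally it is worth confirming that 2 MB hugepages are actually reserved (SPDK's scripts/setup.sh normally takes care of this). A quick check using plain kernel interfaces, nothing SPDK-specific:

    # Per-NUMA-node 2 MB hugepage counts and overall hugepage accounting
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
    grep -i huge /proc/meminfo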
00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:30.275 14:06:59 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.651 14:07:01 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:31.651 14:07:01 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.651 00:05:31.651 real 0m1.424s 00:05:31.651 user 0m1.286s 00:05:31.651 sys 0m0.139s 00:05:31.651 14:07:01 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.651 14:07:01 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:31.651 ************************************ 00:05:31.651 END TEST accel_fill 00:05:31.651 ************************************ 00:05:31.651 14:07:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:31.651 14:07:01 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:31.651 14:07:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:31.651 14:07:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.651 14:07:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:31.651 ************************************ 00:05:31.651 START TEST accel_copy_crc32c 00:05:31.651 ************************************ 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:31.651 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:31.651 [2024-07-25 14:07:01.135374] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:31.652 [2024-07-25 14:07:01.135436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804232 ] 00:05:31.652 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.652 [2024-07-25 14:07:01.191778] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.652 [2024-07-25 14:07:01.295562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.911 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:31.912 
14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:31.912 14:07:01 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.288 00:05:33.288 real 0m1.418s 00:05:33.288 user 0m1.292s 00:05:33.288 sys 0m0.127s 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.288 14:07:02 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:33.288 ************************************ 00:05:33.288 END TEST accel_copy_crc32c 00:05:33.288 ************************************ 00:05:33.288 14:07:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:33.288 14:07:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:33.288 14:07:02 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:33.288 14:07:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.288 14:07:02 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.288 ************************************ 00:05:33.288 START TEST accel_copy_crc32c_C2 00:05:33.288 ************************************ 00:05:33.288 14:07:02 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.288 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:33.289 [2024-07-25 14:07:02.601771] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:33.289 [2024-07-25 14:07:02.601831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804396 ] 00:05:33.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.289 [2024-07-25 14:07:02.657942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.289 [2024-07-25 14:07:02.763360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:33.289 14:07:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
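Note: accel_copy_crc32c_C2 differs from the plain copy_crc32c run above only by the -C 2 flag, and its trace shows the second buffer growing from 4096 to 8192 bytes, which suggests (the log does not confirm it) that -C scales the chained vector count. The logged invocation can be reused by hand under the same assumption as the earlier commands, namely that no JSON module config is required:

    # Chained copy+crc32c variant exactly as logged: -t 1 -w copy_crc32c -y -C 2
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w copy_crc32c -y -C 2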
00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.667 00:05:34.667 real 0m1.433s 00:05:34.667 user 0m1.294s 00:05:34.667 sys 0m0.140s 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.667 14:07:04 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:34.667 ************************************ 00:05:34.667 END TEST accel_copy_crc32c_C2 00:05:34.667 ************************************ 00:05:34.667 14:07:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.667 14:07:04 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:34.667 14:07:04 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.667 14:07:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.667 14:07:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.667 ************************************ 00:05:34.667 START TEST accel_dualcast 00:05:34.667 ************************************ 00:05:34.667 14:07:04 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:34.667 [2024-07-25 14:07:04.082836] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
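Note: the [[ -n software ]] / [[ -n copy_crc32c ]] / [[ software == \s\o\f\t\w\a\r\e ]] lines that close each sub-test are the wrapper's post-run assertions with the variables already expanded by xtrace. Written out with assumed variable names, the check each run must pass before its timing summary is printed amounts to:

    # Reconstructed from the expanded xtrace; the variable names are assumptions.
    # An opcode and a module must have been recorded, and for these runs the
    # module that handled the op must be the software fallback.
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]]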
00:05:34.667 [2024-07-25 14:07:04.082897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804549 ] 00:05:34.667 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.667 [2024-07-25 14:07:04.139828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.667 [2024-07-25 14:07:04.243356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.667 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:34.668 14:07:04 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.048 14:07:05 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:36.048 14:07:05 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.048 00:05:36.048 real 0m1.430s 00:05:36.048 user 0m1.287s 00:05:36.048 sys 0m0.144s 00:05:36.048 14:07:05 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.048 14:07:05 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 ************************************ 00:05:36.048 END TEST accel_dualcast 00:05:36.048 ************************************ 00:05:36.048 14:07:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:36.048 14:07:05 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:36.048 14:07:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:36.048 14:07:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.048 14:07:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 ************************************ 00:05:36.048 START TEST accel_compare 00:05:36.048 ************************************ 00:05:36.048 14:07:05 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:36.048 14:07:05 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:36.048 [2024-07-25 14:07:05.560438] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
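(The IFS=: / read -r var val loop that dominates the trace is accel.sh parsing key:value pairs echoed back for the run and latching accel_module / accel_opc for the post-run checks such as [[ -n software ]] and [[ -n dualcast ]]. A rough, simplified sketch of that idiom follows; only the shell pattern and the two variable names come from the trace, while the case keys and the perf_output.txt stand-in are assumptions.)
while IFS=: read -r var val; do
  case "$var" in
    *module*) accel_module=$val ;;     # latched as "software" in the trace
    *operation*) accel_opc=$val ;;     # latched as "dualcast", "compare", "xor", ...
  esac
done < perf_output.txt                 # stand-in for the real pipe from accel_perf
[[ -n "$accel_module" && -n "$accel_opc" ]] && echo "config parsed"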
00:05:36.048 [2024-07-25 14:07:05.560500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804826 ] 00:05:36.048 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.048 [2024-07-25 14:07:05.616777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.307 [2024-07-25 14:07:05.721360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:36.307 14:07:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.687 
14:07:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:37.687 14:07:06 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.687 00:05:37.687 real 0m1.431s 00:05:37.687 user 0m1.299s 00:05:37.687 sys 0m0.133s 00:05:37.687 14:07:06 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.687 14:07:06 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:37.687 ************************************ 00:05:37.687 END TEST accel_compare 00:05:37.687 ************************************ 00:05:37.687 14:07:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:37.687 14:07:06 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:37.687 14:07:06 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:37.687 14:07:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.687 14:07:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:37.687 ************************************ 00:05:37.687 START TEST accel_xor 00:05:37.687 ************************************ 00:05:37.687 14:07:07 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:37.687 [2024-07-25 14:07:07.041089] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
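(Each sub-test in this section follows the same shape: run_test prints the START banner, times the wrapped accel_test command, emits the real/user/sys lines, and closes with the END banner. A hedged sketch of that wrapper shape is below; the banner text and the final invocation are taken from the log, but the function body is an assumption about autotest_common.sh, not a copy of it.)
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                            # produces the real/user/sys lines seen above
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}
run_test accel_xor accel_test -t 1 -w xor -y   # invocation copied from the trace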
00:05:37.687 [2024-07-25 14:07:07.041164] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid804981 ] 00:05:37.687 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.687 [2024-07-25 14:07:07.098714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.687 [2024-07-25 14:07:07.206783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.687 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:37.688 14:07:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.062 00:05:39.062 real 0m1.427s 00:05:39.062 user 0m1.299s 00:05:39.062 sys 0m0.129s 00:05:39.062 14:07:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.062 14:07:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:39.062 ************************************ 00:05:39.062 END TEST accel_xor 00:05:39.062 ************************************ 00:05:39.062 14:07:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:39.062 14:07:08 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:39.062 14:07:08 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:39.062 14:07:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.062 14:07:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.062 ************************************ 00:05:39.062 START TEST accel_xor 00:05:39.062 ************************************ 00:05:39.062 14:07:08 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:39.062 14:07:08 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:39.062 [2024-07-25 14:07:08.517219] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
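(The second accel_xor pass repeats the workload with three source buffers via -x 3, exercising the multi-source XOR path; the rest of the echoed config, 1-second run, 4096-byte buffers, software module, matches the two-source pass. A minimal reproduction sketch using only the flags visible in the invocation above; running it standalone without the -c config is an assumption.)
# three-source XOR for 1 second with result verification
./build/examples/accel_perf -t 1 -w xor -y -x 3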
00:05:39.062 [2024-07-25 14:07:08.517282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805141 ] 00:05:39.062 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.062 [2024-07-25 14:07:08.574135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.062 [2024-07-25 14:07:08.678172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:39.321 14:07:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:40.700 14:07:09 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.700 00:05:40.700 real 0m1.423s 00:05:40.700 user 0m1.295s 00:05:40.700 sys 0m0.129s 00:05:40.700 14:07:09 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.700 14:07:09 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 ************************************ 00:05:40.700 END TEST accel_xor 00:05:40.700 ************************************ 00:05:40.700 14:07:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:40.700 14:07:09 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:40.700 14:07:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:40.700 14:07:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.700 14:07:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:40.700 ************************************ 00:05:40.700 START TEST accel_dif_verify 00:05:40.700 ************************************ 00:05:40.700 14:07:09 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:40.700 14:07:09 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:40.700 [2024-07-25 14:07:09.985517] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
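(For dif_verify the echoed config adds buffer geometry on top of the usual options: two '4096 bytes' values, one '512 bytes' value and one '8 bytes' value, which plausibly correspond to transfer/buffer size, block size and per-block DIF metadata, but the trace does not label them, so that mapping is an assumption. A reproduction sketch with just the flags shown in the invocation; note that, unlike the earlier workloads, -y is not passed here.)
./build/examples/accel_perf -t 1 -w dif_verify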
00:05:40.700 [2024-07-25 14:07:09.985576] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805326 ] 00:05:40.700 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.700 [2024-07-25 14:07:10.047473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.700 [2024-07-25 14:07:10.153020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:40.700 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:40.701 14:07:10 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:42.079 14:07:11 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:42.079 00:05:42.079 real 0m1.440s 00:05:42.079 user 0m1.310s 00:05:42.079 sys 0m0.132s 00:05:42.079 14:07:11 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.079 14:07:11 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:42.079 ************************************ 00:05:42.079 END TEST accel_dif_verify 00:05:42.079 ************************************ 00:05:42.079 14:07:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:42.079 14:07:11 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:42.079 14:07:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:42.079 14:07:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.079 14:07:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.079 ************************************ 00:05:42.079 START TEST accel_dif_generate 00:05:42.079 ************************************ 00:05:42.079 14:07:11 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 
14:07:11 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:42.079 [2024-07-25 14:07:11.475448] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:42.079 [2024-07-25 14:07:11.475511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805571 ] 00:05:42.079 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.079 [2024-07-25 14:07:11.531663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.079 [2024-07-25 14:07:11.640504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:42.079 14:07:11 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.079 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:42.080 14:07:11 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.489 14:07:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.489 14:07:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.489 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.489 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.489 14:07:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.489 14:07:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.489 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.489 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:43.490 14:07:12 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:43.490 14:07:12 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.490 00:05:43.490 real 0m1.432s 00:05:43.490 user 0m1.294s 00:05:43.490 sys 0m0.140s 00:05:43.490 14:07:12 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:43.490 14:07:12 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:43.490 ************************************ 00:05:43.490 END TEST accel_dif_generate 00:05:43.490 ************************************ 00:05:43.490 14:07:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:43.490 14:07:12 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:43.490 14:07:12 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:43.490 14:07:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.490 14:07:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.490 ************************************ 00:05:43.490 START TEST accel_dif_generate_copy 00:05:43.490 ************************************ 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:43.490 14:07:12 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:43.490 [2024-07-25 14:07:12.954659] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:05:43.490 [2024-07-25 14:07:12.954721] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805731 ] 00:05:43.490 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.490 [2024-07-25 14:07:13.011435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.490 [2024-07-25 14:07:13.114427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.749 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:43.750 14:07:13 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:45.129 00:05:45.129 real 0m1.433s 00:05:45.129 user 0m1.297s 00:05:45.129 sys 0m0.137s 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.129 14:07:14 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:45.129 ************************************ 00:05:45.129 END TEST accel_dif_generate_copy 00:05:45.129 ************************************ 00:05:45.129 14:07:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:45.129 14:07:14 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:45.129 14:07:14 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:45.129 14:07:14 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:45.129 14:07:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.129 14:07:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:45.129 ************************************ 00:05:45.129 START TEST accel_comp 00:05:45.129 ************************************ 00:05:45.129 14:07:14 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:45.129 14:07:14 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:45.129 [2024-07-25 14:07:14.431400] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:45.129 [2024-07-25 14:07:14.431460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid805898 ] 00:05:45.129 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.129 [2024-07-25 14:07:14.489088] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.129 [2024-07-25 14:07:14.594087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:45.129 14:07:14 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:45.130 14:07:14 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:45.130 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:45.130 14:07:14 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:46.508 14:07:15 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.508 00:05:46.508 real 0m1.433s 00:05:46.508 user 0m1.297s 00:05:46.508 sys 0m0.137s 00:05:46.508 14:07:15 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.508 14:07:15 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:46.508 ************************************ 00:05:46.508 END TEST accel_comp 00:05:46.508 ************************************ 00:05:46.508 14:07:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:46.508 14:07:15 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:46.508 14:07:15 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:46.508 14:07:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.508 14:07:15 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.508 ************************************ 00:05:46.508 START TEST accel_decomp 00:05:46.508 ************************************ 00:05:46.508 14:07:15 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:46.508 14:07:15 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:46.508 [2024-07-25 14:07:15.919207] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:05:46.508 [2024-07-25 14:07:15.919271] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806161 ] 00:05:46.508 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.508 [2024-07-25 14:07:15.977136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.508 [2024-07-25 14:07:16.078822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:46.508 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:46.509 14:07:16 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.885 14:07:17 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:47.885 14:07:17 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.885 00:05:47.885 real 0m1.435s 00:05:47.885 user 0m1.303s 00:05:47.885 sys 0m0.134s 00:05:47.885 14:07:17 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.885 14:07:17 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:47.885 ************************************ 00:05:47.885 END TEST accel_decomp 00:05:47.885 ************************************ 00:05:47.885 14:07:17 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:47.885 14:07:17 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:47.885 14:07:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:47.885 14:07:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.885 14:07:17 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.885 ************************************ 00:05:47.885 START TEST accel_decomp_full 00:05:47.885 ************************************ 00:05:47.885 14:07:17 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:47.885 14:07:17 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:47.886 14:07:17 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:47.886 14:07:17 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:47.886 [2024-07-25 14:07:17.400602] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:47.886 [2024-07-25 14:07:17.400680] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806314 ] 00:05:47.886 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.886 [2024-07-25 14:07:17.458012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.144 [2024-07-25 14:07:17.559301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.144 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:48.145 14:07:17 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:49.526 14:07:18 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.526 00:05:49.526 real 0m1.431s 00:05:49.526 user 0m1.298s 00:05:49.526 sys 0m0.135s 00:05:49.526 14:07:18 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.526 14:07:18 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:49.526 ************************************ 00:05:49.526 END TEST accel_decomp_full 00:05:49.526 ************************************ 00:05:49.526 14:07:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:49.526 14:07:18 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:49.526 14:07:18 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:05:49.526 14:07:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.526 14:07:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.527 ************************************ 00:05:49.527 START TEST accel_decomp_mcore 00:05:49.527 ************************************ 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:49.527 14:07:18 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:49.527 [2024-07-25 14:07:18.880408] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:05:49.527 [2024-07-25 14:07:18.880469] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806478 ] 00:05:49.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.527 [2024-07-25 14:07:18.938438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.527 [2024-07-25 14:07:19.052229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.527 [2024-07-25 14:07:19.052296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.527 [2024-07-25 14:07:19.052293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.527 [2024-07-25 14:07:19.052248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.527 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:49.528 14:07:19 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.903 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.904 00:05:50.904 real 0m1.452s 00:05:50.904 user 0m4.738s 00:05:50.904 sys 0m0.152s 00:05:50.904 14:07:20 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.904 14:07:20 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:50.904 ************************************ 00:05:50.904 END TEST accel_decomp_mcore 00:05:50.904 ************************************ 00:05:50.904 14:07:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:50.904 14:07:20 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:50.904 14:07:20 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:50.904 14:07:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.904 14:07:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.904 ************************************ 00:05:50.904 START TEST accel_decomp_full_mcore 00:05:50.904 ************************************ 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:50.904 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:50.904 [2024-07-25 14:07:20.382812] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
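The accel_decomp_full_mcore case starting above drives the same software decompress path as the previous test, but across four cores (-m 0xf) and with the transfer size forced to the whole input file (-o 0; the config echo below shows '111250 bytes' instead of the 4096-byte size used by the non-full variants). A minimal sketch of the equivalent manual invocation, assuming the SPDK tree at the workspace path shown in the log and omitting the -c /dev/fd/62 config plumbing the harness adds:

  # software decompress of test/accel/bib for 1 second on cores 0-3, verifying output (-y)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf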
00:05:50.904 [2024-07-25 14:07:20.382874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806752 ] 00:05:50.904 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.904 [2024-07-25 14:07:20.440207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:50.904 [2024-07-25 14:07:20.547644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:50.904 [2024-07-25 14:07:20.547750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:50.904 [2024-07-25 14:07:20.547850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:50.904 [2024-07-25 14:07:20.547853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.164 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:51.165 14:07:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.545 00:05:52.545 real 0m1.465s 00:05:52.545 user 0m4.808s 00:05:52.545 sys 0m0.152s 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.545 14:07:21 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:52.545 ************************************ 00:05:52.545 END TEST accel_decomp_full_mcore 00:05:52.545 ************************************ 00:05:52.545 14:07:21 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:52.545 14:07:21 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:52.545 14:07:21 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:52.545 14:07:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.545 14:07:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.545 ************************************ 00:05:52.545 START TEST accel_decomp_mthread 00:05:52.545 ************************************ 00:05:52.545 14:07:21 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:52.545 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:52.545 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:52.545 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.545 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:52.545 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.545 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:52.545 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:52.546 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.546 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.546 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.546 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.546 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.546 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:52.546 14:07:21 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:52.546 [2024-07-25 14:07:21.893859] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
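accel_decomp_mthread, which starts above, runs the same decompress workload on a single core (the EAL core mask drops to 0x1 in the parameters that follow) and instead scales out with -T 2, two worker threads per core, at the default 4096-byte transfer size. A sketch of the underlying command under the same assumptions as before:

  # one core, two threads per core, default 4 KiB transfers
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2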
00:05:52.546 [2024-07-25 14:07:21.893941] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid806912 ] 00:05:52.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.546 [2024-07-25 14:07:21.954922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.546 [2024-07-25 14:07:22.057505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.546 14:07:22 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.546 14:07:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.924 14:07:23 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.924 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.925 00:05:53.925 real 0m1.442s 00:05:53.925 user 0m1.303s 00:05:53.925 sys 0m0.140s 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.925 14:07:23 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:53.925 ************************************ 00:05:53.925 END TEST accel_decomp_mthread 00:05:53.925 ************************************ 00:05:53.925 14:07:23 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:53.925 14:07:23 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.925 14:07:23 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:53.925 14:07:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.925 14:07:23 accel -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.925 ************************************ 00:05:53.925 START TEST accel_decomp_full_mthread 00:05:53.925 ************************************ 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:53.925 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:53.925 [2024-07-25 14:07:23.382253] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
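Each accel_test case in this block is a thin wrapper: accel.sh assembles an accel JSON config via build_accel_config (effectively empty here, since accel_json_cfg=() and none of the optional toggles checked at accel.sh@32-36 are set) and hands it to accel_perf through a process substitution, which is why the raw command lines show -c /dev/fd/62. A rough, hypothetical condensation of that plumbing, where $SPDK_EXAMPLE_DIR and $testdir stand in for the build/examples and test/accel paths seen in the log and build_accel_config is the helper defined in test/accel/accel.sh:

  # sketch only; the real wrapper lives in test/accel/accel.sh
  accel_perf() {
      "$SPDK_EXAMPLE_DIR/accel_perf" -c <(build_accel_config) "$@"
  }
  accel_perf -t 1 -w decompress -l "$testdir/bib" -y -o 0 -T 2   # the accel_decomp_full_mthread case starting above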
00:05:53.925 [2024-07-25 14:07:23.382316] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807083 ] 00:05:53.925 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.925 [2024-07-25 14:07:23.438631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.925 [2024-07-25 14:07:23.548201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:54.184 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:54.185 14:07:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.568 00:05:55.568 real 0m1.459s 00:05:55.568 user 0m1.331s 00:05:55.568 sys 0m0.130s 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.568 14:07:24 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:55.568 ************************************ 00:05:55.568 END TEST accel_decomp_full_mthread 
00:05:55.568 ************************************ 00:05:55.568 14:07:24 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.568 14:07:24 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:55.568 14:07:24 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:55.568 14:07:24 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:55.568 14:07:24 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:55.568 14:07:24 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.568 14:07:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.568 14:07:24 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.568 14:07:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.568 14:07:24 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.568 14:07:24 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.568 14:07:24 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.568 14:07:24 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:55.568 14:07:24 accel -- accel/accel.sh@41 -- # jq -r . 00:05:55.568 ************************************ 00:05:55.568 START TEST accel_dif_functional_tests 00:05:55.568 ************************************ 00:05:55.568 14:07:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:55.568 [2024-07-25 14:07:24.911728] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:55.568 [2024-07-25 14:07:24.911791] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807364 ] 00:05:55.568 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.568 [2024-07-25 14:07:24.967855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.568 [2024-07-25 14:07:25.072585] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.568 [2024-07-25 14:07:25.072646] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.568 [2024-07-25 14:07:25.072649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.568 00:05:55.568 00:05:55.569 CUnit - A unit testing framework for C - Version 2.1-3 00:05:55.569 http://cunit.sourceforge.net/ 00:05:55.569 00:05:55.569 00:05:55.569 Suite: accel_dif 00:05:55.569 Test: verify: DIF generated, GUARD check ...passed 00:05:55.569 Test: verify: DIF generated, APPTAG check ...passed 00:05:55.569 Test: verify: DIF generated, REFTAG check ...passed 00:05:55.569 Test: verify: DIF not generated, GUARD check ...[2024-07-25 14:07:25.168494] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:55.569 passed 00:05:55.569 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 14:07:25.168580] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:55.569 passed 00:05:55.569 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 14:07:25.168615] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:55.569 passed 00:05:55.569 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:55.569 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 14:07:25.168683] dif.c: 
876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:55.569 passed 00:05:55.569 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:55.569 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:55.569 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:55.569 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 14:07:25.168814] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:55.569 passed 00:05:55.569 Test: verify copy: DIF generated, GUARD check ...passed 00:05:55.569 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:55.569 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:55.569 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 14:07:25.168968] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:55.569 passed 00:05:55.569 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 14:07:25.169005] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:55.569 passed 00:05:55.569 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 14:07:25.169054] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:55.569 passed 00:05:55.569 Test: generate copy: DIF generated, GUARD check ...passed 00:05:55.569 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:55.569 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:55.569 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:55.569 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:55.569 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:55.569 Test: generate copy: iovecs-len validate ...[2024-07-25 14:07:25.169294] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:05:55.569 passed 00:05:55.569 Test: generate copy: buffer alignment validate ...passed 00:05:55.569 00:05:55.569 Run Summary: Type Total Ran Passed Failed Inactive 00:05:55.569 suites 1 1 n/a 0 0 00:05:55.569 tests 26 26 26 0 0 00:05:55.569 asserts 115 115 115 0 n/a 00:05:55.569 00:05:55.569 Elapsed time = 0.003 seconds 00:05:55.828 00:05:55.828 real 0m0.542s 00:05:55.828 user 0m0.830s 00:05:55.828 sys 0m0.185s 00:05:55.828 14:07:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.828 14:07:25 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:55.828 ************************************ 00:05:55.828 END TEST accel_dif_functional_tests 00:05:55.828 ************************************ 00:05:55.828 14:07:25 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:55.828 00:05:55.828 real 0m32.437s 00:05:55.828 user 0m36.008s 00:05:55.828 sys 0m4.421s 00:05:55.828 14:07:25 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.828 14:07:25 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.828 ************************************ 00:05:55.828 END TEST accel 00:05:55.828 ************************************ 00:05:55.828 14:07:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:55.828 14:07:25 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:55.828 14:07:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.828 14:07:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.828 14:07:25 -- common/autotest_common.sh@10 -- # set +x 00:05:56.086 ************************************ 00:05:56.086 START TEST accel_rpc 00:05:56.086 ************************************ 00:05:56.086 14:07:25 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:56.086 * Looking for test storage... 00:05:56.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:56.086 14:07:25 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:56.086 14:07:25 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=807431 00:05:56.086 14:07:25 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:56.086 14:07:25 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 807431 00:05:56.086 14:07:25 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 807431 ']' 00:05:56.086 14:07:25 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.086 14:07:25 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.086 14:07:25 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.086 14:07:25 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.086 14:07:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.086 [2024-07-25 14:07:25.594146] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
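The accel_rpc suite beginning here starts a bare spdk_tgt with --wait-for-rpc so that the copy opcode can be reassigned before the accel framework initializes, waits on the default /var/tmp/spdk.sock socket, and then drives everything over rpc.py. A sketch of that flow, using only RPC names that appear in this log and a simple poll loop as a rough stand-in for waitforlisten:

  ./build/bin/spdk_tgt --wait-for-rpc &
  spdk_tgt_pid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  ./scripts/rpc.py accel_assign_opc -o copy -m software      # pin the copy opcode to the software module
  ./scripts/rpc.py framework_start_init                      # finish startup with the assignment in place
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected to print: software
  kill "$spdk_tgt_pid"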
00:05:56.086 [2024-07-25 14:07:25.594231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807431 ] 00:05:56.086 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.086 [2024-07-25 14:07:25.653255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.346 [2024-07-25 14:07:25.765870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.346 14:07:25 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.346 14:07:25 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:56.346 14:07:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:56.346 14:07:25 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:56.346 14:07:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:56.346 14:07:25 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:56.346 14:07:25 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:56.346 14:07:25 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.346 14:07:25 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.346 14:07:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.346 ************************************ 00:05:56.346 START TEST accel_assign_opcode 00:05:56.346 ************************************ 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.346 [2024-07-25 14:07:25.830503] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.346 [2024-07-25 14:07:25.838527] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.346 14:07:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.606 14:07:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.606 14:07:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:56.606 14:07:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.606 14:07:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:05:56.606 14:07:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:56.606 14:07:26 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:56.607 14:07:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.607 software 00:05:56.607 00:05:56.607 real 0m0.279s 00:05:56.607 user 0m0.041s 00:05:56.607 sys 0m0.007s 00:05:56.607 14:07:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.607 14:07:26 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:56.607 ************************************ 00:05:56.607 END TEST accel_assign_opcode 00:05:56.607 ************************************ 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:56.607 14:07:26 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 807431 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 807431 ']' 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 807431 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807431 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807431' 00:05:56.607 killing process with pid 807431 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@967 -- # kill 807431 00:05:56.607 14:07:26 accel_rpc -- common/autotest_common.sh@972 -- # wait 807431 00:05:57.176 00:05:57.176 real 0m1.108s 00:05:57.176 user 0m1.052s 00:05:57.176 sys 0m0.423s 00:05:57.176 14:07:26 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.176 14:07:26 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.176 ************************************ 00:05:57.176 END TEST accel_rpc 00:05:57.176 ************************************ 00:05:57.176 14:07:26 -- common/autotest_common.sh@1142 -- # return 0 00:05:57.176 14:07:26 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:57.176 14:07:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.176 14:07:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.176 14:07:26 -- common/autotest_common.sh@10 -- # set +x 00:05:57.176 ************************************ 00:05:57.176 START TEST app_cmdline 00:05:57.176 ************************************ 00:05:57.176 14:07:26 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:57.176 * Looking for test storage... 
00:05:57.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:57.176 14:07:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:57.176 14:07:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=807637 00:05:57.176 14:07:26 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:57.176 14:07:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 807637 00:05:57.176 14:07:26 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 807637 ']' 00:05:57.176 14:07:26 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.176 14:07:26 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.176 14:07:26 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.176 14:07:26 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.176 14:07:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.176 [2024-07-25 14:07:26.756257] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:05:57.176 [2024-07-25 14:07:26.756354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid807637 ] 00:05:57.176 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.176 [2024-07-25 14:07:26.812784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.436 [2024-07-25 14:07:26.935493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.696 14:07:27 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.696 14:07:27 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:57.696 14:07:27 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:57.954 { 00:05:57.954 "version": "SPDK v24.09-pre git sha1 d3d267b54", 00:05:57.954 "fields": { 00:05:57.954 "major": 24, 00:05:57.954 "minor": 9, 00:05:57.954 "patch": 0, 00:05:57.954 "suffix": "-pre", 00:05:57.954 "commit": "d3d267b54" 00:05:57.954 } 00:05:57.954 } 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:57.954 14:07:27 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:57.954 14:07:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:57.954 14:07:27 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:57.954 14:07:27 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.954 14:07:27 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:57.955 14:07:27 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:58.215 request: 00:05:58.215 { 00:05:58.215 "method": "env_dpdk_get_mem_stats", 00:05:58.215 "req_id": 1 00:05:58.215 } 00:05:58.215 Got JSON-RPC error response 00:05:58.215 response: 00:05:58.215 { 00:05:58.215 "code": -32601, 00:05:58.215 "message": "Method not found" 00:05:58.215 } 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.215 14:07:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 807637 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 807637 ']' 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 807637 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 807637 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 807637' 00:05:58.215 killing process with pid 807637 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@967 -- # kill 807637 00:05:58.215 14:07:27 app_cmdline -- common/autotest_common.sh@972 -- # wait 807637 00:05:58.784 00:05:58.784 real 0m1.571s 00:05:58.784 user 0m1.945s 00:05:58.784 sys 0m0.475s 00:05:58.784 14:07:28 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.784 
14:07:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:58.784 ************************************ 00:05:58.784 END TEST app_cmdline 00:05:58.784 ************************************ 00:05:58.784 14:07:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.784 14:07:28 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:58.784 14:07:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.784 14:07:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.784 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.784 ************************************ 00:05:58.784 START TEST version 00:05:58.784 ************************************ 00:05:58.784 14:07:28 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:58.784 * Looking for test storage... 00:05:58.784 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:58.784 14:07:28 version -- app/version.sh@17 -- # get_header_version major 00:05:58.784 14:07:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.784 14:07:28 version -- app/version.sh@14 -- # cut -f2 00:05:58.784 14:07:28 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.784 14:07:28 version -- app/version.sh@17 -- # major=24 00:05:58.784 14:07:28 version -- app/version.sh@18 -- # get_header_version minor 00:05:58.784 14:07:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.784 14:07:28 version -- app/version.sh@14 -- # cut -f2 00:05:58.784 14:07:28 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.784 14:07:28 version -- app/version.sh@18 -- # minor=9 00:05:58.784 14:07:28 version -- app/version.sh@19 -- # get_header_version patch 00:05:58.784 14:07:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.784 14:07:28 version -- app/version.sh@14 -- # cut -f2 00:05:58.784 14:07:28 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.784 14:07:28 version -- app/version.sh@19 -- # patch=0 00:05:58.784 14:07:28 version -- app/version.sh@20 -- # get_header_version suffix 00:05:58.784 14:07:28 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:58.784 14:07:28 version -- app/version.sh@14 -- # cut -f2 00:05:58.784 14:07:28 version -- app/version.sh@14 -- # tr -d '"' 00:05:58.784 14:07:28 version -- app/version.sh@20 -- # suffix=-pre 00:05:58.784 14:07:28 version -- app/version.sh@22 -- # version=24.9 00:05:58.784 14:07:28 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:58.784 14:07:28 version -- app/version.sh@28 -- # version=24.9rc0 00:05:58.784 14:07:28 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:58.784 14:07:28 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
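Looking back at the app_cmdline test that finished just above: it is essentially an RPC allow-list check. spdk_tgt is started with --rpcs-allowed spdk_get_version,rpc_get_methods, the test confirms that rpc_get_methods reports exactly those two methods and that spdk_get_version returns the version object printed above, and any other call (env_dpdk_get_mem_stats is the probe) must fail with JSON-RPC error -32601 "Method not found". A condensed sketch of the same check, with paths and socket as used elsewhere in this run:

    BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $BIN/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt=$!                                   # the harness waits for the RPC socket (waitforlisten) here

    $RPC spdk_get_version                    # allowed
    $RPC rpc_get_methods | jq -r '.[]'       # allowed; lists only the two permitted methods
    $RPC env_dpdk_get_mem_stats || echo "rejected as expected (-32601 Method not found)"

    kill $tgt; wait $tgt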
00:05:58.784 14:07:28 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:58.784 14:07:28 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:58.784 00:05:58.784 real 0m0.109s 00:05:58.784 user 0m0.063s 00:05:58.784 sys 0m0.067s 00:05:58.784 14:07:28 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.784 14:07:28 version -- common/autotest_common.sh@10 -- # set +x 00:05:58.784 ************************************ 00:05:58.784 END TEST version 00:05:58.784 ************************************ 00:05:58.784 14:07:28 -- common/autotest_common.sh@1142 -- # return 0 00:05:58.784 14:07:28 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:58.784 14:07:28 -- spdk/autotest.sh@198 -- # uname -s 00:05:58.784 14:07:28 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:58.784 14:07:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:58.785 14:07:28 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:58.785 14:07:28 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:58.785 14:07:28 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:58.785 14:07:28 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:58.785 14:07:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:58.785 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:05:58.785 14:07:28 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:58.785 14:07:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:58.785 14:07:28 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:58.785 14:07:28 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:58.785 14:07:28 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:58.785 14:07:28 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:58.785 14:07:28 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:58.785 14:07:28 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:58.785 14:07:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.785 14:07:28 -- common/autotest_common.sh@10 -- # set +x 00:05:59.044 ************************************ 00:05:59.044 START TEST nvmf_tcp 00:05:59.044 ************************************ 00:05:59.044 14:07:28 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:59.044 * Looking for test storage... 00:05:59.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:59.044 14:07:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:59.044 14:07:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:59.044 14:07:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:59.044 14:07:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:59.044 14:07:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.044 14:07:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:59.044 ************************************ 00:05:59.044 START TEST nvmf_target_core 00:05:59.044 ************************************ 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:59.044 * Looking for test storage... 
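The version test above compares two views of the same version string: the fields in include/spdk/version.h, parsed one at a time, and the Python package's spdk.__version__. The header parsing reduces to the grep/cut/tr pipeline visible in the trace; a compact sketch (the tab-delimited cut -f2 is what the trace shows, so the #define lines are assumed to be tab separated):

    H=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h

    get_header_version() {    # e.g. get_header_version MAJOR  ->  24
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$H" | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)     # 24
    minor=$(get_header_version MINOR)     # 9
    patch=$(get_header_version PATCH)     # 0, so it is left out of the version string
    suffix=$(get_header_version SUFFIX)   # -pre, mapped to the rc0 expectation

    # the expected string becomes 24.9rc0, which is then compared against
    # python3 -c 'import spdk; print(spdk.__version__)'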
00:05:59.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.044 14:07:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:59.045 ************************************ 00:05:59.045 START TEST nvmf_abort 00:05:59.045 ************************************ 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:59.045 * Looking for test storage... 00:05:59.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
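Each nvmf test begins by sourcing test/nvmf/common.sh, which re-establishes the defaults seen in the trace: ports 4420/4421/4422, a fixed serial, and a per-run host identity freshly generated with nvme gen-hostnqn. Only the resulting values are visible above; a sketch of what those assignments amount to (the extraction of NVME_HOSTID from the NQN is an assumption, since the trace shows only the final value):

    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_HOSTNQN=$(nvme gen-hostnqn)             # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}          # assumed: strip everything up to "uuid:"
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")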
00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:59.045 14:07:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:05:59.045 14:07:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.618 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:01.618 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:01.618 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:01.618 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:01.619 14:07:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:01.619 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:01.619 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.619 14:07:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:01.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:01.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:01.619 14:07:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:01.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:01.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:06:01.619 00:06:01.619 --- 10.0.0.2 ping statistics --- 00:06:01.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.619 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:01.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:01.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:06:01.619 00:06:01.619 --- 10.0.0.1 ping statistics --- 00:06:01.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:01.619 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:01.619 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=809687 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 809687 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 809687 ']' 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.620 14:07:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.620 [2024-07-25 14:07:30.959602] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
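The nvmftestinit/nvmf_tcp_init phase above turns the two detected ice ports (cvl_0_0 and cvl_0_1) into a minimal initiator/target network: cvl_0_0 is moved into a dedicated namespace and becomes the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened through iptables, and both directions are ping-verified before the target application starts. Stripped of the harness wrappers, the sequence is roughly (interface names taken from this run):

    NS=cvl_0_0_ns_spdk

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                          # target-side port lives in the namespace

    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address, inside the namespace

    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                                       # root namespace -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                   # namespace -> initiator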
00:06:01.620 [2024-07-25 14:07:30.959679] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:01.620 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.620 [2024-07-25 14:07:31.022247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.620 [2024-07-25 14:07:31.123423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:01.620 [2024-07-25 14:07:31.123482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:01.620 [2024-07-25 14:07:31.123511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:01.620 [2024-07-25 14:07:31.123522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:01.620 [2024-07-25 14:07:31.123531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:01.620 [2024-07-25 14:07:31.123617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.620 [2024-07-25 14:07:31.123686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.620 [2024-07-25 14:07:31.123688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.620 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.620 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:06:01.620 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:01.620 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:01.620 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.620 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:01.620 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:01.620 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.879 [2024-07-25 14:07:31.275920] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.879 Malloc0 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.879 Delay0 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.879 [2024-07-25 14:07:31.346259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.879 14:07:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:01.879 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.879 [2024-07-25 14:07:31.493170] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:04.420 Initializing NVMe Controllers 00:06:04.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:04.420 controller IO queue size 128 less than required 00:06:04.420 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:04.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:04.420 Initialization complete. Launching workers. 
00:06:04.420 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33163 00:06:04.420 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33224, failed to submit 62 00:06:04.420 success 33167, unsuccess 57, failed 0 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:04.420 rmmod nvme_tcp 00:06:04.420 rmmod nvme_fabrics 00:06:04.420 rmmod nvme_keyring 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 809687 ']' 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 809687 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 809687 ']' 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 809687 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 809687 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 809687' 00:06:04.420 killing process with pid 809687 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 809687 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 809687 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.420 14:07:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.328 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:06.328 00:06:06.328 real 0m7.347s 00:06:06.328 user 0m10.600s 00:06:06.328 sys 0m2.521s 00:06:06.328 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.328 14:07:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.328 ************************************ 00:06:06.328 END TEST nvmf_abort 00:06:06.328 ************************************ 00:06:06.328 14:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:06:06.328 14:07:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:06.328 14:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:06.328 14:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.328 14:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:06.587 ************************************ 00:06:06.587 START TEST nvmf_ns_hotplug_stress 00:06:06.587 ************************************ 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:06.588 * Looking for test storage... 
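Summing up the nvmf_abort test that ends above: the target is provisioned entirely over RPC against the nvmf_tgt running inside cvl_0_0_ns_spdk, and the abort example is then pointed at it from the initiator side. The commands, condensed from the trace with flags copied verbatim rather than interpreted:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort

    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
    $RPC bdev_malloc_create 64 4096 -b Malloc0                  # 64 MB backing bdev, 4096-byte blocks
    $RPC bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000            # added latency (presumably microseconds) so I/O stays in flight long enough to abort
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # initiator side; per the output above, nearly all submitted I/O ends up aborted
    $ABORT -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128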
00:06:06.588 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:06.588 14:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:08.496 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:08.496 14:07:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:08.496 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:08.496 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:08.496 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:08.496 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:08.497 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:08.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:08.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:06:08.757 00:06:08.757 --- 10.0.0.2 ping statistics --- 00:06:08.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.757 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:08.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:08.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:06:08.757 00:06:08.757 --- 10.0.0.1 ping statistics --- 00:06:08.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:08.757 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=811951 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 811951 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 811951 ']' 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
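[Editor's sketch] The trace above (nvmf/common.sh@229 onward) is the phy-mode TCP bring-up: the two ice ports discovered earlier (cvl_0_0 and cvl_0_1) are split so that cvl_0_0 moves into a private network namespace and acts as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), a ping in each direction proves connectivity, and nvmf_tgt is then launched inside the namespace. Below is a minimal stand-alone reconstruction of that sequence; every command, address, and path appears in the trace above, and only the consolidation into one runnable block (run as root) is editorial.

    # reconstructed from the xtrace; interface names and 10.0.0.x addresses are the ones used in this run
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                                   # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator side, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0           # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                              # root ns -> target ns
    ip netns exec $NS ping -c 1 10.0.0.1                            # target ns -> root ns
    # the target then runs inside the namespace, exactly as traced:
    ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &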
00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.757 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.757 [2024-07-25 14:07:38.307532] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:06:08.757 [2024-07-25 14:07:38.307610] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:08.757 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.757 [2024-07-25 14:07:38.373421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.016 [2024-07-25 14:07:38.483374] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:09.016 [2024-07-25 14:07:38.483438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:09.016 [2024-07-25 14:07:38.483467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:09.016 [2024-07-25 14:07:38.483478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:09.016 [2024-07-25 14:07:38.483488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:09.016 [2024-07-25 14:07:38.483546] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.016 [2024-07-25 14:07:38.483665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.016 [2024-07-25 14:07:38.483669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.016 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.016 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:06:09.016 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:09.016 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:09.016 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.016 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.016 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:09.016 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:09.273 [2024-07-25 14:07:38.835984] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.273 14:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:09.532 14:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.790 
[2024-07-25 14:07:39.387834] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.790 14:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:10.048 14:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:10.305 Malloc0 00:06:10.305 14:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:10.563 Delay0 00:06:10.563 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.821 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:11.078 NULL1 00:06:11.078 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:11.336 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=812336 00:06:11.336 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:11.336 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:11.336 14:07:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.336 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.713 Read completed with error (sct=0, sc=11) 00:06:12.713 14:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.713 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.971 14:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:12.971 14:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:13.228 true 00:06:13.228 14:07:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:13.228 14:07:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.795 14:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.053 14:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:14.053 14:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:14.619 true 00:06:14.619 14:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:14.619 14:07:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.619 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.877 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:14.877 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:15.136 true 00:06:15.136 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:15.136 14:07:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.394 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.651 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:15.651 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:15.909 true 00:06:15.909 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:15.909 14:07:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.287 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.287 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.287 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:17.287 14:07:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:17.545 true 00:06:17.545 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:17.545 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.803 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.060 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:18.060 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:18.318 true 00:06:18.318 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:18.318 14:07:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.287 14:07:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.287 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.545 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:19.545 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:19.803 true 00:06:19.803 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:19.803 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.061 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.319 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:20.319 14:07:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:20.577 true 00:06:20.577 14:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:20.577 14:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.513 14:07:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.769 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:21.769 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:22.025 true 00:06:22.025 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:22.025 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.282 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.538 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:22.538 14:07:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:22.796 true 00:06:22.796 14:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:22.796 14:07:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.734 14:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.734 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.734 14:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:23.734 14:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:23.991 true 00:06:23.991 14:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:23.991 14:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.254 14:07:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:06:24.513 14:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:24.513 14:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:24.772 true 00:06:24.772 14:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:24.772 14:07:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.710 14:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.968 14:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:25.968 14:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:26.226 true 00:06:26.226 14:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:26.226 14:07:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.483 14:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.740 14:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:26.740 14:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:26.997 true 00:06:26.997 14:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:26.997 14:07:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.932 14:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.932 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.190 14:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:28.190 14:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:28.447 true 00:06:28.447 14:07:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:28.447 14:07:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.705 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.963 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:28.963 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:29.220 true 00:06:29.220 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:29.220 14:07:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.157 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.157 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.415 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:30.415 14:07:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:30.674 true 00:06:30.674 14:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:30.674 14:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.931 14:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.189 14:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:31.189 14:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:31.447 true 00:06:31.447 14:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:31.447 14:08:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.386 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.386 14:08:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.644 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:32.644 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:32.902 true 00:06:32.902 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:32.902 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.160 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.417 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:33.417 14:08:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:33.676 true 00:06:33.676 14:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:33.676 14:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.932 14:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.189 14:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:34.189 14:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:34.447 true 00:06:34.447 14:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:34.447 14:08:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.411 14:08:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.670 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:35.670 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:35.927 true 00:06:35.927 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:35.927 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.186 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.444 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:36.444 14:08:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:36.702 true 00:06:36.702 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:36.702 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.960 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.218 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:37.218 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:37.476 true 00:06:37.476 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:37.476 14:08:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.671 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.671 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.671 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:38.671 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:38.929 true 00:06:38.929 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:38.929 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.495 14:08:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.495 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:39.495 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:39.753 true 00:06:39.753 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:39.753 14:08:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.690 14:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.949 14:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:40.949 14:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:41.207 true 00:06:41.207 14:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:41.207 14:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.464 14:08:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.723 Initializing NVMe Controllers 00:06:41.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:41.723 Controller IO queue size 128, less than required. 00:06:41.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:41.723 Controller IO queue size 128, less than required. 00:06:41.723 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:41.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:41.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:41.723 Initialization complete. Launching workers. 
00:06:41.723 ======================================================== 00:06:41.723 Latency(us) 00:06:41.723 Device Information : IOPS MiB/s Average min max 00:06:41.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1006.70 0.49 63337.34 2423.19 1014942.83 00:06:41.723 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10473.65 5.11 12220.81 2938.42 538031.37 00:06:41.723 ======================================================== 00:06:41.723 Total : 11480.35 5.61 16703.17 2423.19 1014942.83 00:06:41.723 00:06:41.723 14:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:41.723 14:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:41.981 true 00:06:41.981 14:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 812336 00:06:41.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (812336) - No such process 00:06:41.981 14:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 812336 00:06:41.981 14:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.239 14:08:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.496 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:42.496 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:42.497 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:42.497 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.497 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:42.754 null0 00:06:42.754 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.754 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.754 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:43.012 null1 00:06:43.012 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.012 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.012 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:43.270 null2 00:06:43.270 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.270 
14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.270 14:08:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:43.528 null3 00:06:43.528 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.528 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.528 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:43.788 null4 00:06:43.788 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.788 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.788 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:44.047 null5 00:06:44.047 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:44.047 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:44.047 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:44.304 null6 00:06:44.304 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:44.304 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:44.304 14:08:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:44.565 null7 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
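The sh@58-@60 entries above create the eight backing devices for the hot-plug workers, null0 through null7. Roughly, the loop being traced looks like the sketch below; rpc_py is the same illustrative shorthand as before, and per the trace each call prints the new bdev name (null0, null1, ...).

    nthreads=8
    pids=()

    # sh@59-@60: one null bdev per worker; the trailing 100 and 4096 arguments are the
    # size and block-size values passed to bdev_null_create exactly as seen in the trace
    for (( i = 0; i < nthreads; i++ )); do
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done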
00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
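The sh@62-@64 entries show the workers being started: each add_remove call is backgrounded and its PID appended to pids, which is why the @14-@18 add/remove traces from all eight workers interleave from here on. A sketch of that launcher, with the wait on the collected PIDs that appears at sh@66:

    # sh@62-@64: start one backgrounded add_remove worker per null bdev
    for (( i = 0; i < nthreads; i++ )); do
        add_remove $(( i + 1 )) "null$i" &
        pids+=($!)
    done

    # sh@66: block until every worker has finished its ten iterations
    wait "${pids[@]}"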
00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 816275 816276 816278 816280 816283 816285 816287 816291 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.565 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.823 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.823 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.823 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.823 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.823 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.823 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.823 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.823 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.081 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.339 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.339 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.339 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.339 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.339 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.339 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.339 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.339 14:08:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.597 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.598 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.856 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.856 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.856 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.856 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.856 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.856 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.856 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.856 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.114 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.373 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.373 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.373 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.373 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.373 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.373 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.373 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.373 14:08:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.631 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.890 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.890 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.890 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.890 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.890 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.890 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.890 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.890 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
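All of the sh@14-@18 entries in this stretch come from the per-worker add_remove function: each worker pins one namespace ID to one null bdev and attaches/detaches it ten times. Reconstructed from the traced lines (rpc_py and subsys remain illustrative shorthands, not names from the script):

    # sh@14-@18: repeatedly attach and detach one bdev under a fixed namespace ID
    add_remove() {
        local nsid=$1 bdev=$2
        for (( i = 0; i < 10; i++ )); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$subsys" "$bdev"     # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns "$subsys" "$nsid"             # sh@18
        done
    }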
00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.149 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.408 14:08:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:47.666 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.666 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:47.666 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:47.666 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:47.666 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:47.666 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:47.666 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:47.666 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:47.924 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.925 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.925 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.183 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.183 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:48.183 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:48.183 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:48.183 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:48.183 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:48.183 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:48.183 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.441 14:08:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:48.700 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.700 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:48.700 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:48.700 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:48.700 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:48.700 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:48.700 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:48.700 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.958 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.216 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.216 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.216 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.216 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.216 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.216 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.216 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.217 14:08:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:49.474 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:49.732 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.732 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.732 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.732 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.732 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.732 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.732 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.732 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:49.991 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:49.991 rmmod nvme_tcp 00:06:49.991 rmmod nvme_fabrics 00:06:49.991 rmmod nvme_keyring 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 811951 ']' 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 811951 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 811951 ']' 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 811951 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 811951 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 811951' 00:06:50.284 killing process with pid 811951 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 811951 00:06:50.284 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 811951 00:06:50.546 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:50.546 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 
-- # [[ tcp == \t\c\p ]] 00:06:50.546 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:50.546 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:50.546 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:50.546 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.546 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.546 14:08:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:52.451 00:06:52.451 real 0m46.006s 00:06:52.451 user 3m30.448s 00:06:52.451 sys 0m16.154s 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:52.451 ************************************ 00:06:52.451 END TEST nvmf_ns_hotplug_stress 00:06:52.451 ************************************ 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.451 ************************************ 00:06:52.451 START TEST nvmf_delete_subsystem 00:06:52.451 ************************************ 00:06:52.451 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:52.711 * Looking for test storage... 
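Editor's note: before the nvmf_delete_subsystem run announced above begins, the previous test tears itself down through nvmftestfini, as traced: the nvme-tcp and nvme-fabrics modules are unloaded (dropping nvme_tcp, nvme_fabrics and nvme_keyring), the nvmf_tgt process (pid 811951 here) is killed and waited on, the cvl_0_0_ns_spdk namespace is removed and the initiator-side address on cvl_0_1 is flushed. A minimal sketch of that sequence follows; only the commands visible in the log are certain, the wrapper function and the netns removal detail are assumptions.

    # Hedged sketch of the nvmftestfini teardown traced above.
    nvmftestfini_sketch() {
        sync
        modprobe -v -r nvme-tcp          # also unloads nvme_fabrics / nvme_keyring
        modprobe -v -r nvme-fabrics
        if [[ -n $nvmfpid ]]; then
            kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 811951 in the log
        fi
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # _remove_spdk_ns (assumed detail)
        ip -4 addr flush cvl_0_1
    }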
00:06:52.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:06:52.711 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:52.712 14:08:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
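Editor's note: what follows in the trace is gather_supported_nvmf_pci_devs from nvmf/common.sh. It builds lists of supported Intel E810/X722 and Mellanox device IDs, scans the PCI bus for matching functions, and records the kernel net devices bound to each one (cvl_0_0 and cvl_0_1 on this machine). A rough sketch of the E810 discovery step is below; the 0x159b device ID and the /sys/bus/pci/.../net lookup are taken from the trace, while the lspci-based scan is an assumption about how the candidate PCI addresses are gathered.

    # Hedged sketch of the NIC discovery traced below.
    e810=()
    while read -r pci; do
        e810+=("$pci")
    done < <(lspci -Dnmm -d 8086:159b | awk '{print $1}')   # e.g. 0000:0a:00.0, 0000:0a:00.1
    net_devs=()
    for pci in "${e810[@]}"; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue
            echo "Found net devices under $pci: ${path##*/}"
            net_devs+=("${path##*/}")
        done
    done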
00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.618 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:54.619 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:54.619 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:54.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:54.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:54.619 14:08:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.619 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:06:54.878 00:06:54.878 --- 10.0.0.2 ping statistics --- 00:06:54.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.878 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:54.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:06:54.878 00:06:54.878 --- 10.0.0.1 ping statistics --- 00:06:54.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.878 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=819161 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 819161 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 819161 ']' 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.878 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.878 [2024-07-25 14:08:24.393838] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:06:54.878 [2024-07-25 14:08:24.393907] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.878 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.878 [2024-07-25 14:08:24.455682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:55.136 [2024-07-25 14:08:24.564871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.136 [2024-07-25 14:08:24.564946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.136 [2024-07-25 14:08:24.564974] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.136 [2024-07-25 14:08:24.564986] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.136 [2024-07-25 14:08:24.564995] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.136 [2024-07-25 14:08:24.565086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.136 [2024-07-25 14:08:24.565092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.136 [2024-07-25 14:08:24.709614] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.136 [2024-07-25 14:08:24.725842] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.136 NULL1 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.136 Delay0 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=819183 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:55.136 14:08:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:55.136 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.394 [2024-07-25 14:08:24.800552] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
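Editor's note: pulling the traced delete_subsystem.sh steps together, the target gets a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a deliberately slow Delay0 bdev; spdk_nvme_perf then runs against it for 5 seconds while the subsystem is deleted out from under it, which is why the I/O error lines that follow are expected. Every RPC call and perf option below is copied from the trace; the backgrounding of perf and the sleep before the delete are a paraphrase of the script lines (@26, @28, @30, @32) rather than the literal script text.

    # Hedged sketch assembling the traced delete_subsystem.sh setup and stress step.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!                  # 819183 in the log
    sleep 2
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while perf still runs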
00:06:57.296 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:57.296 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.296 14:08:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 [2024-07-25 14:08:27.099913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f308400d330 is same with the state(5) to be set 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed 
with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.556 starting I/O failed: -6 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Read completed with error (sct=0, sc=8) 00:06:57.556 Write completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 starting I/O failed: -6 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 starting I/O failed: -6 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 starting I/O failed: -6 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 starting I/O failed: -6 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 starting I/O failed: -6 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with 
error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 starting I/O failed: -6 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 starting I/O failed: -6 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 starting I/O failed: -6 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 [2024-07-25 14:08:27.100841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc03e0 is same with the state(5) to be set 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 
00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:57.557 Write completed with error (sct=0, sc=8) 00:06:57.557 Read completed with error (sct=0, sc=8) 00:06:58.496 [2024-07-25 14:08:28.064286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc1ac0 is same with the state(5) to be set 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Write completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 [2024-07-25 14:08:28.099521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc05c0 is same with the state(5) to be set 00:06:58.496 Write completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Write completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Read completed with error (sct=0, sc=8) 00:06:58.496 Write completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 [2024-07-25 14:08:28.099714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc0c20 is same with the state(5) to be set 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 
Write completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 [2024-07-25 14:08:28.103922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f308400d660 is same with the state(5) to be set 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Read completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 Write completed with error (sct=0, sc=8) 00:06:58.497 [2024-07-25 14:08:28.104607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f308400d000 is same with the state(5) to be set 00:06:58.497 Initializing NVMe Controllers 00:06:58.497 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:58.497 Controller IO queue size 128, less than required. 00:06:58.497 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:58.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:58.497 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:58.497 Initialization complete. Launching workers. 00:06:58.497 ======================================================== 00:06:58.497 Latency(us) 00:06:58.497 Device Information : IOPS MiB/s Average min max 00:06:58.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 164.75 0.08 905958.50 397.57 1012204.32 00:06:58.497 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.39 0.07 955809.30 393.05 1044821.31 00:06:58.497 ======================================================== 00:06:58.497 Total : 311.14 0.15 929413.02 393.05 1044821.31 00:06:58.497 00:06:58.497 [2024-07-25 14:08:28.105154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc1ac0 (9): Bad file descriptor 00:06:58.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:58.497 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.497 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:58.497 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 819183 00:06:58.497 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 819183 00:06:59.124 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (819183) - No such process 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 819183 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@648 -- # local es=0 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 819183 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 819183 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.124 [2024-07-25 14:08:28.626738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=819600 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 819600 00:06:59.124 14:08:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.124 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.124 [2024-07-25 14:08:28.685074] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:59.691 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.691 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 819600 00:06:59.691 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.261 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.261 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 819600 00:07:00.261 14:08:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.520 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.520 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 819600 00:07:00.520 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.090 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.090 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 819600 00:07:01.090 14:08:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.657 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.657 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 819600 00:07:01.657 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.225 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.225 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 819600 00:07:02.225 14:08:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.225 Initializing NVMe Controllers 00:07:02.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.225 Controller IO queue size 128, less than required. 00:07:02.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:02.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:02.225 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:02.225 Initialization complete. Launching workers. 
00:07:02.225 ======================================================== 00:07:02.225 Latency(us) 00:07:02.225 Device Information : IOPS MiB/s Average min max 00:07:02.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004677.50 1000141.10 1043203.99 00:07:02.225 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004457.90 1000194.15 1042241.84 00:07:02.225 ======================================================== 00:07:02.225 Total : 256.00 0.12 1004567.70 1000141.10 1043203.99 00:07:02.225 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 819600 00:07:02.792 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (819600) - No such process 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 819600 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:02.792 rmmod nvme_tcp 00:07:02.792 rmmod nvme_fabrics 00:07:02.792 rmmod nvme_keyring 00:07:02.792 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 819161 ']' 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 819161 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 819161 ']' 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 819161 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 819161 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 819161' 00:07:02.793 killing process with pid 819161 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 819161 00:07:02.793 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 819161 00:07:03.052 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.052 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.052 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.052 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.052 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.053 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.053 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.053 14:08:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:04.956 00:07:04.956 real 0m12.487s 00:07:04.956 user 0m28.144s 00:07:04.956 sys 0m3.074s 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:04.956 ************************************ 00:07:04.956 END TEST nvmf_delete_subsystem 00:07:04.956 ************************************ 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.956 ************************************ 00:07:04.956 START TEST nvmf_host_management 00:07:04.956 ************************************ 00:07:04.956 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:05.214 * Looking for test storage... 
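The delete_subsystem test traced above follows a simple pattern: launch spdk_nvme_perf in the background against nqn.2016-06.io.spdk:cnode1, delete the subsystem while I/O is still in flight (which produces the aborted read/write completions logged above), then poll until the perf process disappears. A minimal sketch of that polling step, assuming the background PID is in $perf_pid (simplified, not the verbatim script):

# Wait for the background perf process to exit after the subsystem is deleted.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do    # kill -0 only tests that the PID still exists
    (( delay++ > 20 )) && break              # cap the wait, roughly mirroring the script's delay counter
    sleep 0.5
done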
00:07:05.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.214 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.214 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:05.214 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.215 14:08:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.118 
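Before the host_management test can pick a target interface, nvmftestinit enumerates NVMe/TCP-capable NICs by PCI vendor/device ID (Intel E810 0x1592/0x159b, X722 0x37d2, and several Mellanox parts) and keeps only devices that expose an up net interface; the lines that follow show this matching finding cvl_0_0 and cvl_0_1 under 0000:0a:00.0/1. A rough illustration of the same lookup for the E810 ports this log reports (the helper name is made up for the sketch):

# Hypothetical helper: list Intel E810 (8086:159b) functions and their kernel net devices.
find_e810_ports() {
    lspci -Dnn | awk '/8086:159b/ {print $1}'          # PCI address of each matching function
}
for pci in $(find_e810_ports); do
    ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null     # e.g. cvl_0_0 / cvl_0_1
done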
14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.118 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.118 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.118 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.118 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:07.119 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.119 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:07.377 00:07:07.377 --- 10.0.0.2 ping statistics --- 00:07:07.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.377 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:07:07.377 00:07:07.377 --- 10.0.0.1 ping statistics --- 00:07:07.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.377 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=822055 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 822055 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 822055 ']' 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.377 14:08:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.377 [2024-07-25 14:08:36.908409] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:07:07.377 [2024-07-25 14:08:36.908504] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.377 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.377 [2024-07-25 14:08:36.970696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.635 [2024-07-25 14:08:37.073173] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.635 [2024-07-25 14:08:37.073226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.635 [2024-07-25 14:08:37.073249] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.635 [2024-07-25 14:08:37.073260] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.635 [2024-07-25 14:08:37.073270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.635 [2024-07-25 14:08:37.073337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.635 [2024-07-25 14:08:37.073407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.635 [2024-07-25 14:08:37.073540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:07.635 [2024-07-25 14:08:37.073543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.635 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.635 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:07.635 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.635 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.635 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.636 [2024-07-25 14:08:37.236556] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.636 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.636 Malloc0 00:07:07.894 [2024-07-25 14:08:37.297719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=822103 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 822103 /var/tmp/bdevperf.sock 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 822103 ']' 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:07.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
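The rpc_cmd block above (the heredoc cat'd at host_management.sh@23) provisions the target that bdevperf connects to; the individual RPCs are not echoed in the trace. Using only names visible in the log (Malloc0, nqn.2016-06.io.spdk:cnode0, the 64/512 Malloc sizes, the SPDKISFASTANDAWESOME serial from nvmf/common.sh, and the 10.0.0.2:4420 listener), the sequence would look roughly like this sketch, not the verbatim rpcs.txt contents:

# Rough sketch of the target-side provisioning implied by the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0                                    # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420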
00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:07.894 { 00:07:07.894 "params": { 00:07:07.894 "name": "Nvme$subsystem", 00:07:07.894 "trtype": "$TEST_TRANSPORT", 00:07:07.894 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:07.894 "adrfam": "ipv4", 00:07:07.894 "trsvcid": "$NVMF_PORT", 00:07:07.894 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:07.894 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:07.894 "hdgst": ${hdgst:-false}, 00:07:07.894 "ddgst": ${ddgst:-false} 00:07:07.894 }, 00:07:07.894 "method": "bdev_nvme_attach_controller" 00:07:07.894 } 00:07:07.894 EOF 00:07:07.894 )") 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:07.894 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:07.894 "params": { 00:07:07.894 "name": "Nvme0", 00:07:07.894 "trtype": "tcp", 00:07:07.894 "traddr": "10.0.0.2", 00:07:07.894 "adrfam": "ipv4", 00:07:07.894 "trsvcid": "4420", 00:07:07.894 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:07.894 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:07.894 "hdgst": false, 00:07:07.894 "ddgst": false 00:07:07.894 }, 00:07:07.894 "method": "bdev_nvme_attach_controller" 00:07:07.894 }' 00:07:07.894 [2024-07-25 14:08:37.377477] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:07:07.894 [2024-07-25 14:08:37.377551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822103 ] 00:07:07.894 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.894 [2024-07-25 14:08:37.438892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.151 [2024-07-25 14:08:37.548570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.411 Running I/O for 10 seconds... 
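With bdevperf now running I/O for 10 seconds, the waitforio helper traced next polls bdev statistics over bdevperf's RPC socket until Nvme0n1 has completed at least 100 reads (67 on the first pass, 579 on the second, as shown below). A simplified sketch of that check, not the helper's exact code:

# Poll the read-op count over the bdevperf RPC socket until it reaches the threshold.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for (( i = 10; i != 0; i-- )); do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done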
00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:08.411 14:08:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.671 14:08:38 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:08.671 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:08.672 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:08.672 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.672 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.672 [2024-07-25 14:08:38.289251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 
[2024-07-25 14:08:38.289553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 
14:08:38.289857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.289972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.289986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 
14:08:38.290175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.672 [2024-07-25 14:08:38.290453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.672 [2024-07-25 14:08:38.290468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.290983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.290997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.673 [2024-07-25 14:08:38.291320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.673 [2024-07-25 14:08:38.291425] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c135a0 was disconnected and freed. reset controller. 
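
The burst of ABORTED - SQ DELETION completions above is the expected effect of the failover step this test drives: target/host_management.sh@84 removed nqn.2016-06.io.spdk:host0 from cnode0 while bdevperf still had a full queue of 64 outstanding I/Os, so every command still queued on I/O qpair 1 was completed with ABORTED - SQ DELETION (00/08) when the target tore the queue down. Reduced to plain rpc.py calls, this is roughly the shape of that step (rpc_py is shortened from the full scripts/rpc.py path used in this run, and the surrounding trap/kill handling is omitted; this is a sketch, not the literal script):

    rpc_py=./scripts/rpc.py
    # drop the host while bdevperf still has I/O in flight ...
    $rpc_py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # ... every command queued on the subsystem's I/O qpair now completes with
    # ABORTED - SQ DELETION (00/08), which is the notice flood above
    $rpc_py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # bdev_nvme then resets and reconnects the controller, as the next records show
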
00:07:08.673 [2024-07-25 14:08:38.292594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:07:08.673 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:08.673 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 
00:07:08.673 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:08.673 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
00:07:08.673 task offset: 86400 on job bdev=Nvme0n1 fails 
00:07:08.673 
00:07:08.673 Latency(us) 
00:07:08.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:07:08.673 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:07:08.673 Job: Nvme0n1 ended in about 0.40 seconds with error 
00:07:08.673 Verification LBA range: start 0x0 length 0x400 
00:07:08.673 Nvme0n1 : 0.40 1605.57 100.35 160.56 0.00 35186.13 2779.21 33787.45 
00:07:08.673 =================================================================================================================== 
00:07:08.673 Total : 1605.57 100.35 160.56 0.00 35186.13 2779.21 33787.45 
00:07:08.673 [2024-07-25 14:08:38.294512] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:07:08.673 [2024-07-25 14:08:38.294542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1802790 (9): Bad file descriptor 
00:07:08.673 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:07:08.673 14:08:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 
00:07:08.931 [2024-07-25 14:08:38.343270] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
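
The 0.40 s bdevperf run above ending in error is part of the scenario; the follow-up run below has to succeed. The gating that precedes the failover is worth noting: before removing the host, the xtrace (target/host_management.sh@45-64, just ahead of the abort notices) shows a waitforio helper polling the bdevperf RPC socket until Nvme0n1 has completed at least 100 reads, so the failover always lands on a busy controller. A minimal sketch of that loop as reconstructed from the trace (rpc_cmd, the jq filter, and the 10 x 0.25 s retry budget are taken from the trace; everything else is an approximation, not the upstream source):

    waitforio() {
        local rpc_sock=$1 bdev=$2
        [ -z "$rpc_sock" ] && return 1      # host_management.sh@45
        [ -z "$bdev" ] && return 1          # host_management.sh@49
        local ret=1 i
        for ((i = 10; i != 0; i--)); do     # up to 10 polls, 0.25 s apart
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
                            | jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0                       # enough reads observed (67, then 579 in this run)
                break
            fi
            sleep 0.25
        done
        return $ret
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1
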
00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 822103 00:07:09.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (822103) - No such process 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:09.875 { 00:07:09.875 "params": { 00:07:09.875 "name": "Nvme$subsystem", 00:07:09.875 "trtype": "$TEST_TRANSPORT", 00:07:09.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:09.875 "adrfam": "ipv4", 00:07:09.875 "trsvcid": "$NVMF_PORT", 00:07:09.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:09.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:09.875 "hdgst": ${hdgst:-false}, 00:07:09.875 "ddgst": ${ddgst:-false} 00:07:09.875 }, 00:07:09.875 "method": "bdev_nvme_attach_controller" 00:07:09.875 } 00:07:09.875 EOF 00:07:09.875 )") 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:09.875 14:08:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:09.875 "params": { 00:07:09.875 "name": "Nvme0", 00:07:09.875 "trtype": "tcp", 00:07:09.875 "traddr": "10.0.0.2", 00:07:09.875 "adrfam": "ipv4", 00:07:09.875 "trsvcid": "4420", 00:07:09.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:09.875 "hdgst": false, 00:07:09.875 "ddgst": false 00:07:09.875 }, 00:07:09.875 "method": "bdev_nvme_attach_controller" 00:07:09.875 }' 00:07:09.875 [2024-07-25 14:08:39.351859] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
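
For the retry run, host_management.sh@100 restarts bdevperf and feeds it the attach-controller config above through /dev/fd/62 (gen_nvmf_target_json 0 expands the heredoc with the values printed at the end of the trace). As a rough standalone equivalent, assuming the usual SPDK JSON-config wrapper around the printed fragment (the wrapper itself is not shown in this excerpt) and a hypothetical /tmp/bdevperf.json file:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              },
              "method": "bdev_nvme_attach_controller"
            }
          ]
        }
      ]
    }

followed by the same workload parameters as the trace:

    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1
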
00:07:09.875 [2024-07-25 14:08:39.351931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid822379 ] 00:07:09.875 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.875 [2024-07-25 14:08:39.412520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.875 [2024-07-25 14:08:39.524271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.134 Running I/O for 1 seconds... 00:07:11.076 00:07:11.076 Latency(us) 00:07:11.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.076 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:11.076 Verification LBA range: start 0x0 length 0x400 00:07:11.076 Nvme0n1 : 1.01 1712.28 107.02 0.00 0.00 36760.83 5922.51 32622.36 00:07:11.076 =================================================================================================================== 00:07:11.076 Total : 1712.28 107.02 0.00 0.00 36760.83 5922.51 32622.36 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.645 14:08:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.645 rmmod nvme_tcp 00:07:11.645 rmmod nvme_fabrics 00:07:11.645 rmmod nvme_keyring 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 822055 ']' 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 822055 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 822055 ']' 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 822055 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@953 -- # uname 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 822055 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 822055' 00:07:11.645 killing process with pid 822055 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 822055 00:07:11.645 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 822055 00:07:11.907 [2024-07-25 14:08:41.317934] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:11.907 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.907 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.907 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.907 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.907 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.907 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.907 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.907 14:08:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:13.819 00:07:13.819 real 0m8.795s 00:07:13.819 user 0m20.036s 00:07:13.819 sys 0m2.640s 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.819 ************************************ 00:07:13.819 END TEST nvmf_host_management 00:07:13.819 ************************************ 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.819 ************************************ 00:07:13.819 START TEST nvmf_lvol 00:07:13.819 
************************************ 00:07:13.819 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:14.078 * Looking for test storage... 00:07:14.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.078 14:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:15.989 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:15.989 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.989 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:15.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:15.990 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.990 14:08:45 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.990 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.249 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.249 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:07:16.249 00:07:16.249 --- 10.0.0.2 ping statistics --- 00:07:16.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.249 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.249 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.249 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:07:16.249 00:07:16.249 --- 10.0.0.1 ping statistics --- 00:07:16.249 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.249 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=824547 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 824547 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 824547 ']' 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.249 14:08:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.249 [2024-07-25 14:08:45.818892] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
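
Condensing the nvmf_tcp_init trace above: the test gives the target one of the two E810 ports (cvl_0_0) inside a dedicated network namespace at 10.0.0.2, keeps the other port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, opens TCP/4420, and sanity-pings both directions before nvmf_tgt is started inside the namespace. The same sequence, with the address flushes omitted and the Jenkins workspace path shortened to ./build/bin/nvmf_tgt, comes down to:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
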
00:07:16.249 [2024-07-25 14:08:45.818981] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.249 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.249 [2024-07-25 14:08:45.881125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.508 [2024-07-25 14:08:45.988320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.508 [2024-07-25 14:08:45.988389] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.508 [2024-07-25 14:08:45.988402] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.508 [2024-07-25 14:08:45.988421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.508 [2024-07-25 14:08:45.988430] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.508 [2024-07-25 14:08:45.988511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.508 [2024-07-25 14:08:45.988579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.508 [2024-07-25 14:08:45.988582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.508 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.508 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:16.508 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:16.508 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:16.508 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.508 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:16.508 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:16.767 [2024-07-25 14:08:46.357427] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:16.767 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:17.338 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:17.338 14:08:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:17.597 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:17.598 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:17.856 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:18.115 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4469d219-4272-43bf-b20f-d8d1404d0861 
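
At this point the nvmf_lvol test has built its backing stack entirely out of RPCs: a TCP transport, two 64 MB malloc bdevs with 512-byte blocks, a RAID0 bdev striped across them, and a logical-volume store on top of the RAID0 bdev. The trace that follows then creates a 20 MB lvol from the store, exports it as a namespace, and, while spdk_nvme_perf keeps a 10-second randwrite load on it, snapshots it, resizes it to 30 MB, clones the snapshot, and inflates the clone so it no longer depends on the snapshot. Collapsed into plain rpc.py calls (rpc.py stands for the full scripts/rpc.py path used in the run; $lvs/$lvol/$snap/$clone stand in for the UUIDs printed in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                          # -> Malloc0
    rpc.py bdev_malloc_create 64 512                          # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)          # prints the lvstore UUID
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)         # 20 MB volume, exported as the namespace
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)     # taken while perf I/O is running
    rpc.py bdev_lvol_resize "$lvol" 30                        # grow the live volume to 30 MB
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"                         # allocate the clone's clusters, detaching it from the snapshot
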
00:07:18.115 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4469d219-4272-43bf-b20f-d8d1404d0861 lvol 20 00:07:18.374 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1461fe34-3f09-4d3c-8639-535d1451286a 00:07:18.374 14:08:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:18.633 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1461fe34-3f09-4d3c-8639-535d1451286a 00:07:18.893 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:18.893 [2024-07-25 14:08:48.540396] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.154 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:19.154 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=824894 00:07:19.154 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:19.154 14:08:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:19.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.352 14:08:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1461fe34-3f09-4d3c-8639-535d1451286a MY_SNAPSHOT 00:07:20.610 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5297daca-e7aa-4cd6-8976-fb700e6c3dcb 00:07:20.610 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1461fe34-3f09-4d3c-8639-535d1451286a 30 00:07:20.868 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5297daca-e7aa-4cd6-8976-fb700e6c3dcb MY_CLONE 00:07:21.439 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=78f79f60-697c-4e6d-b393-0f18d5cb598a 00:07:21.439 14:08:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 78f79f60-697c-4e6d-b393-0f18d5cb598a 00:07:22.006 14:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 824894 00:07:30.152 Initializing NVMe Controllers 00:07:30.152 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:30.152 Controller IO queue size 128, less than required. 00:07:30.152 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:30.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:30.152 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:30.152 Initialization complete. Launching workers. 00:07:30.152 ======================================================== 00:07:30.152 Latency(us) 00:07:30.152 Device Information : IOPS MiB/s Average min max 00:07:30.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10317.80 40.30 12407.70 670.88 92881.49 00:07:30.152 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10559.00 41.25 12128.61 1969.29 72192.55 00:07:30.152 ======================================================== 00:07:30.152 Total : 20876.80 81.55 12266.54 670.88 92881.49 00:07:30.152 00:07:30.152 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:30.152 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1461fe34-3f09-4d3c-8639-535d1451286a 00:07:30.153 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4469d219-4272-43bf-b20f-d8d1404d0861 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:30.412 rmmod nvme_tcp 00:07:30.412 rmmod nvme_fabrics 00:07:30.412 rmmod nvme_keyring 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 824547 ']' 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 824547 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 824547 ']' 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 824547 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 824547 00:07:30.412 14:08:59 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 824547' 00:07:30.412 killing process with pid 824547 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 824547 00:07:30.412 14:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 824547 00:07:30.671 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:30.671 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:30.671 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:30.671 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:30.671 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:30.671 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.671 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.671 14:09:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:33.211 00:07:33.211 real 0m18.880s 00:07:33.211 user 1m3.501s 00:07:33.211 sys 0m5.830s 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:33.211 ************************************ 00:07:33.211 END TEST nvmf_lvol 00:07:33.211 ************************************ 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:33.211 ************************************ 00:07:33.211 START TEST nvmf_lvs_grow 00:07:33.211 ************************************ 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:33.211 * Looking for test storage... 
00:07:33.211 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.211 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.212 14:09:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:33.212 14:09:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:33.212 14:09:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:35.115 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:35.115 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:35.115 
14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:35.115 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:35.115 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:35.115 14:09:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:35.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:35.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:07:35.115 00:07:35.115 --- 10.0.0.2 ping statistics --- 00:07:35.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.115 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:35.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:35.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:07:35.115 00:07:35.115 --- 10.0.0.1 ping statistics --- 00:07:35.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:35.115 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:35.115 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=828280 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 828280 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 828280 ']' 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.116 14:09:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.116 [2024-07-25 14:09:04.754164] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:07:35.116 [2024-07-25 14:09:04.754241] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.374 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.374 [2024-07-25 14:09:04.818401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.374 [2024-07-25 14:09:04.925702] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.374 [2024-07-25 14:09:04.925749] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.374 [2024-07-25 14:09:04.925780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.374 [2024-07-25 14:09:04.925792] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.374 [2024-07-25 14:09:04.925803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.374 [2024-07-25 14:09:04.925853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.632 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:35.632 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:35.632 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:35.632 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:35.632 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.632 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:35.632 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:35.890 [2024-07-25 14:09:05.322210] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:35.890 ************************************ 00:07:35.890 START TEST lvs_grow_clean 00:07:35.890 ************************************ 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:35.890 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.148 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:36.148 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:36.406 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8ff97002-b762-4455-b533-c5ed0591dfea 00:07:36.406 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:36.406 14:09:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:36.666 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:36.666 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:36.666 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8ff97002-b762-4455-b533-c5ed0591dfea lvol 150 00:07:36.923 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe 00:07:36.923 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.923 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:37.182 [2024-07-25 14:09:06.699519] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:37.183 [2024-07-25 14:09:06.699596] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:37.183 true 00:07:37.183 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:37.183 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:37.443 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:37.443 14:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:37.701 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe 00:07:37.960 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.219 [2024-07-25 14:09:07.762784] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.219 14:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=829033 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 829033 /var/tmp/bdevperf.sock 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 829033 ']' 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.477 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:38.477 [2024-07-25 14:09:08.065834] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:07:38.478 [2024-07-25 14:09:08.065921] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid829033 ] 00:07:38.478 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.478 [2024-07-25 14:09:08.125463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.748 [2024-07-25 14:09:08.234904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.748 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:38.748 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:38.748 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:39.007 Nvme0n1 00:07:39.007 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:39.266 [ 00:07:39.266 { 00:07:39.266 "name": "Nvme0n1", 00:07:39.266 "aliases": [ 00:07:39.266 "0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe" 00:07:39.266 ], 00:07:39.266 "product_name": "NVMe disk", 00:07:39.266 "block_size": 4096, 00:07:39.266 "num_blocks": 38912, 00:07:39.266 "uuid": "0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe", 00:07:39.266 "assigned_rate_limits": { 00:07:39.266 "rw_ios_per_sec": 0, 00:07:39.266 "rw_mbytes_per_sec": 0, 00:07:39.266 "r_mbytes_per_sec": 0, 00:07:39.266 "w_mbytes_per_sec": 0 00:07:39.266 }, 00:07:39.266 "claimed": false, 00:07:39.266 "zoned": false, 00:07:39.266 "supported_io_types": { 00:07:39.266 "read": true, 00:07:39.266 "write": true, 00:07:39.266 "unmap": true, 00:07:39.266 "flush": true, 00:07:39.266 "reset": true, 00:07:39.266 "nvme_admin": true, 00:07:39.266 "nvme_io": true, 00:07:39.266 "nvme_io_md": false, 00:07:39.266 "write_zeroes": true, 00:07:39.266 "zcopy": false, 00:07:39.266 "get_zone_info": false, 00:07:39.266 "zone_management": false, 00:07:39.266 "zone_append": false, 00:07:39.266 "compare": true, 00:07:39.266 "compare_and_write": true, 00:07:39.266 "abort": true, 00:07:39.266 "seek_hole": false, 00:07:39.266 "seek_data": false, 00:07:39.266 "copy": true, 00:07:39.266 "nvme_iov_md": false 00:07:39.266 }, 00:07:39.266 "memory_domains": [ 00:07:39.266 { 00:07:39.266 "dma_device_id": "system", 00:07:39.266 "dma_device_type": 1 00:07:39.266 } 00:07:39.266 ], 00:07:39.266 "driver_specific": { 00:07:39.266 "nvme": [ 00:07:39.266 { 00:07:39.266 "trid": { 00:07:39.266 "trtype": "TCP", 00:07:39.266 "adrfam": "IPv4", 00:07:39.266 "traddr": "10.0.0.2", 00:07:39.266 "trsvcid": "4420", 00:07:39.266 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:39.266 }, 00:07:39.266 "ctrlr_data": { 00:07:39.266 "cntlid": 1, 00:07:39.266 "vendor_id": "0x8086", 00:07:39.266 "model_number": "SPDK bdev Controller", 00:07:39.266 "serial_number": "SPDK0", 00:07:39.266 "firmware_revision": "24.09", 00:07:39.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.266 "oacs": { 00:07:39.266 "security": 0, 00:07:39.266 "format": 0, 00:07:39.266 "firmware": 0, 00:07:39.266 "ns_manage": 0 00:07:39.266 }, 00:07:39.266 
"multi_ctrlr": true, 00:07:39.266 "ana_reporting": false 00:07:39.266 }, 00:07:39.266 "vs": { 00:07:39.266 "nvme_version": "1.3" 00:07:39.266 }, 00:07:39.266 "ns_data": { 00:07:39.266 "id": 1, 00:07:39.266 "can_share": true 00:07:39.266 } 00:07:39.266 } 00:07:39.266 ], 00:07:39.266 "mp_policy": "active_passive" 00:07:39.266 } 00:07:39.266 } 00:07:39.266 ] 00:07:39.266 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=829323 00:07:39.266 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:39.266 14:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:39.525 Running I/O for 10 seconds... 00:07:40.461 Latency(us) 00:07:40.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.461 Nvme0n1 : 1.00 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:07:40.461 =================================================================================================================== 00:07:40.461 Total : 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:07:40.461 00:07:41.398 14:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:41.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.398 Nvme0n1 : 2.00 15812.00 61.77 0.00 0.00 0.00 0.00 0.00 00:07:41.398 =================================================================================================================== 00:07:41.398 Total : 15812.00 61.77 0.00 0.00 0.00 0.00 0.00 00:07:41.398 00:07:41.655 true 00:07:41.655 14:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:41.655 14:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:41.915 14:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:41.915 14:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:41.915 14:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 829323 00:07:42.485 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.485 Nvme0n1 : 3.00 15929.00 62.22 0.00 0.00 0.00 0.00 0.00 00:07:42.485 =================================================================================================================== 00:07:42.485 Total : 15929.00 62.22 0.00 0.00 0.00 0.00 0.00 00:07:42.485 00:07:43.425 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.425 Nvme0n1 : 4.00 16037.00 62.64 0.00 0.00 0.00 0.00 0.00 00:07:43.425 =================================================================================================================== 00:07:43.425 Total : 16037.00 62.64 0.00 0.00 0.00 0.00 0.00 00:07:43.425 00:07:44.803 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:07:44.803 Nvme0n1 : 5.00 16131.60 63.01 0.00 0.00 0.00 0.00 0.00 00:07:44.803 =================================================================================================================== 00:07:44.803 Total : 16131.60 63.01 0.00 0.00 0.00 0.00 0.00 00:07:44.803 00:07:45.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.738 Nvme0n1 : 6.00 16215.83 63.34 0.00 0.00 0.00 0.00 0.00 00:07:45.738 =================================================================================================================== 00:07:45.738 Total : 16215.83 63.34 0.00 0.00 0.00 0.00 0.00 00:07:45.738 00:07:46.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.704 Nvme0n1 : 7.00 16260.29 63.52 0.00 0.00 0.00 0.00 0.00 00:07:46.704 =================================================================================================================== 00:07:46.704 Total : 16260.29 63.52 0.00 0.00 0.00 0.00 0.00 00:07:46.704 00:07:47.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.645 Nvme0n1 : 8.00 16307.38 63.70 0.00 0.00 0.00 0.00 0.00 00:07:47.645 =================================================================================================================== 00:07:47.645 Total : 16307.38 63.70 0.00 0.00 0.00 0.00 0.00 00:07:47.645 00:07:48.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.585 Nvme0n1 : 9.00 16344.00 63.84 0.00 0.00 0.00 0.00 0.00 00:07:48.585 =================================================================================================================== 00:07:48.585 Total : 16344.00 63.84 0.00 0.00 0.00 0.00 0.00 00:07:48.585 00:07:49.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.522 Nvme0n1 : 10.00 16373.30 63.96 0.00 0.00 0.00 0.00 0.00 00:07:49.522 =================================================================================================================== 00:07:49.522 Total : 16373.30 63.96 0.00 0.00 0.00 0.00 0.00 00:07:49.522 00:07:49.522 00:07:49.522 Latency(us) 00:07:49.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.522 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.522 Nvme0n1 : 10.01 16372.72 63.96 0.00 0.00 7813.14 4271.98 19418.07 00:07:49.522 =================================================================================================================== 00:07:49.522 Total : 16372.72 63.96 0.00 0.00 7813.14 4271.98 19418.07 00:07:49.522 0 00:07:49.522 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 829033 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 829033 ']' 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 829033 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 829033 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:49.523 14:09:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 829033' 00:07:49.523 killing process with pid 829033 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 829033 00:07:49.523 Received shutdown signal, test time was about 10.000000 seconds 00:07:49.523 00:07:49.523 Latency(us) 00:07:49.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.523 =================================================================================================================== 00:07:49.523 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:49.523 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 829033 00:07:49.781 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.063 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:50.321 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:50.321 14:09:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:50.581 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:50.581 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:50.581 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.840 [2024-07-25 14:09:20.394806] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:50.840 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:51.098 request: 00:07:51.098 { 00:07:51.098 "uuid": "8ff97002-b762-4455-b533-c5ed0591dfea", 00:07:51.098 "method": "bdev_lvol_get_lvstores", 00:07:51.098 "req_id": 1 00:07:51.098 } 00:07:51.098 Got JSON-RPC error response 00:07:51.098 response: 00:07:51.098 { 00:07:51.098 "code": -19, 00:07:51.098 "message": "No such device" 00:07:51.098 } 00:07:51.098 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:07:51.098 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:51.098 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:51.098 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:51.098 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:51.356 aio_bdev 00:07:51.356 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe 00:07:51.356 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe 00:07:51.356 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:51.356 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:07:51.356 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:51.356 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:51.356 14:09:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:51.921 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe -t 2000 00:07:51.921 [ 00:07:51.921 { 00:07:51.921 "name": "0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe", 00:07:51.921 "aliases": [ 00:07:51.921 "lvs/lvol" 00:07:51.921 ], 00:07:51.921 "product_name": "Logical Volume", 00:07:51.921 "block_size": 4096, 00:07:51.921 "num_blocks": 38912, 00:07:51.921 "uuid": "0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe", 00:07:51.921 "assigned_rate_limits": { 00:07:51.921 "rw_ios_per_sec": 0, 00:07:51.921 "rw_mbytes_per_sec": 0, 00:07:51.921 "r_mbytes_per_sec": 0, 00:07:51.921 "w_mbytes_per_sec": 0 00:07:51.921 }, 00:07:51.921 "claimed": false, 00:07:51.921 "zoned": false, 00:07:51.921 "supported_io_types": { 00:07:51.921 "read": true, 00:07:51.921 "write": true, 00:07:51.921 "unmap": true, 00:07:51.921 "flush": false, 00:07:51.921 "reset": true, 00:07:51.921 "nvme_admin": false, 00:07:51.921 "nvme_io": false, 00:07:51.921 "nvme_io_md": false, 00:07:51.921 "write_zeroes": true, 00:07:51.921 "zcopy": false, 00:07:51.921 "get_zone_info": false, 00:07:51.921 "zone_management": false, 00:07:51.921 "zone_append": false, 00:07:51.921 "compare": false, 00:07:51.921 "compare_and_write": false, 00:07:51.921 "abort": false, 00:07:51.921 "seek_hole": true, 00:07:51.921 "seek_data": true, 00:07:51.921 "copy": false, 00:07:51.921 "nvme_iov_md": false 00:07:51.921 }, 00:07:51.921 "driver_specific": { 00:07:51.921 "lvol": { 00:07:51.921 "lvol_store_uuid": "8ff97002-b762-4455-b533-c5ed0591dfea", 00:07:51.921 "base_bdev": "aio_bdev", 00:07:51.921 "thin_provision": false, 00:07:51.921 "num_allocated_clusters": 38, 00:07:51.921 "snapshot": false, 00:07:51.921 "clone": false, 00:07:51.921 "esnap_clone": false 00:07:51.921 } 00:07:51.921 } 00:07:51.921 } 00:07:51.921 ] 00:07:51.921 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:07:51.921 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:51.921 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:52.179 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:52.179 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:52.179 14:09:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:52.439 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:52.439 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0224b518-2b87-4f9c-9b6a-7a9f1a01a5fe 00:07:52.699 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8ff97002-b762-4455-b533-c5ed0591dfea 00:07:52.958 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.216 00:07:53.216 real 0m17.408s 00:07:53.216 user 0m16.791s 00:07:53.216 sys 0m1.905s 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:53.216 ************************************ 00:07:53.216 END TEST lvs_grow_clean 00:07:53.216 ************************************ 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:53.216 ************************************ 00:07:53.216 START TEST lvs_grow_dirty 00:07:53.216 ************************************ 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.216 14:09:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.784 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:53.784 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 
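[Annotation, not part of the captured trace] The xtrace lines above are the lvs_grow_dirty setup: recreate a 200 MiB backing file, build an AIO bdev on it with a 4 KiB block size, and create an lvstore with a 4 MiB cluster size. A minimal sketch of the same sequence against a running SPDK target, assuming rpc.py is on PATH and using a shortened backing-file path in place of the full Jenkins workspace path:

  truncate -s 200M /tmp/aio_bdev_file                       # 200 MiB backing file (grown to 400 MiB later in the test)
  rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096   # AIO bdev with 4096-byte blocks
  rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs         # lvstore "lvs"; prints its UUID
  rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150            # 150 MiB logical volume on that lvstore

The UUIDs printed by the last two calls are what the trace captures as $lvs and $lvol and reuses for every later get_lvstores check.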
00:07:53.784 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:07:53.784 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:07:53.784 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:54.042 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:54.042 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:54.042 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b lvol 150 00:07:54.301 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 00:07:54.301 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.301 14:09:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:54.559 [2024-07-25 14:09:24.120192] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:54.559 [2024-07-25 14:09:24.120288] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:54.559 true 00:07:54.559 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:07:54.559 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:54.819 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:54.819 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.079 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 00:07:55.338 14:09:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.596 [2024-07-25 14:09:25.095242] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.596 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=831397 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 831397 /var/tmp/bdevperf.sock 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 831397 ']' 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:55.855 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:55.855 [2024-07-25 14:09:25.446811] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
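[Annotation, not part of the captured trace] At this point the lvol has been exported over NVMe-oF TCP and the bdevperf initiator is being started. A condensed sketch of the export and attach performed here, with the subsystem name, lvol UUID, listen address and bdevperf flags taken from the trace (only the workspace prefix is shortened):

  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 67e4cf72-86ef-4caf-a12d-1e9bec2f1df0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # separate bdevperf process with its own RPC socket: randwrite, queue depth 128, 4 KiB IO, 10 s
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

The -z flag keeps bdevperf idle until the perform_tests helper (seen a little further down in the trace) kicks off the 10-second run.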
00:07:55.855 [2024-07-25 14:09:25.446898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831397 ] 00:07:55.855 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.855 [2024-07-25 14:09:25.503836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.112 [2024-07-25 14:09:25.609561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.112 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.112 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:07:56.112 14:09:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:56.680 Nvme0n1 00:07:56.680 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:56.938 [ 00:07:56.938 { 00:07:56.938 "name": "Nvme0n1", 00:07:56.938 "aliases": [ 00:07:56.938 "67e4cf72-86ef-4caf-a12d-1e9bec2f1df0" 00:07:56.938 ], 00:07:56.938 "product_name": "NVMe disk", 00:07:56.938 "block_size": 4096, 00:07:56.938 "num_blocks": 38912, 00:07:56.938 "uuid": "67e4cf72-86ef-4caf-a12d-1e9bec2f1df0", 00:07:56.938 "assigned_rate_limits": { 00:07:56.938 "rw_ios_per_sec": 0, 00:07:56.938 "rw_mbytes_per_sec": 0, 00:07:56.938 "r_mbytes_per_sec": 0, 00:07:56.938 "w_mbytes_per_sec": 0 00:07:56.938 }, 00:07:56.938 "claimed": false, 00:07:56.938 "zoned": false, 00:07:56.938 "supported_io_types": { 00:07:56.938 "read": true, 00:07:56.938 "write": true, 00:07:56.938 "unmap": true, 00:07:56.938 "flush": true, 00:07:56.938 "reset": true, 00:07:56.938 "nvme_admin": true, 00:07:56.938 "nvme_io": true, 00:07:56.938 "nvme_io_md": false, 00:07:56.938 "write_zeroes": true, 00:07:56.938 "zcopy": false, 00:07:56.938 "get_zone_info": false, 00:07:56.938 "zone_management": false, 00:07:56.938 "zone_append": false, 00:07:56.938 "compare": true, 00:07:56.938 "compare_and_write": true, 00:07:56.938 "abort": true, 00:07:56.938 "seek_hole": false, 00:07:56.938 "seek_data": false, 00:07:56.938 "copy": true, 00:07:56.938 "nvme_iov_md": false 00:07:56.938 }, 00:07:56.938 "memory_domains": [ 00:07:56.938 { 00:07:56.938 "dma_device_id": "system", 00:07:56.938 "dma_device_type": 1 00:07:56.938 } 00:07:56.938 ], 00:07:56.938 "driver_specific": { 00:07:56.938 "nvme": [ 00:07:56.938 { 00:07:56.938 "trid": { 00:07:56.938 "trtype": "TCP", 00:07:56.938 "adrfam": "IPv4", 00:07:56.938 "traddr": "10.0.0.2", 00:07:56.938 "trsvcid": "4420", 00:07:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:56.938 }, 00:07:56.938 "ctrlr_data": { 00:07:56.938 "cntlid": 1, 00:07:56.938 "vendor_id": "0x8086", 00:07:56.938 "model_number": "SPDK bdev Controller", 00:07:56.938 "serial_number": "SPDK0", 00:07:56.938 "firmware_revision": "24.09", 00:07:56.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:56.938 "oacs": { 00:07:56.938 "security": 0, 00:07:56.938 "format": 0, 00:07:56.938 "firmware": 0, 00:07:56.938 "ns_manage": 0 00:07:56.938 }, 00:07:56.938 
"multi_ctrlr": true, 00:07:56.938 "ana_reporting": false 00:07:56.938 }, 00:07:56.938 "vs": { 00:07:56.938 "nvme_version": "1.3" 00:07:56.938 }, 00:07:56.938 "ns_data": { 00:07:56.938 "id": 1, 00:07:56.938 "can_share": true 00:07:56.938 } 00:07:56.938 } 00:07:56.938 ], 00:07:56.938 "mp_policy": "active_passive" 00:07:56.938 } 00:07:56.938 } 00:07:56.938 ] 00:07:56.938 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=831519 00:07:56.938 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:56.939 14:09:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:56.939 Running I/O for 10 seconds... 00:07:57.877 Latency(us) 00:07:57.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:57.877 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.877 Nvme0n1 : 1.00 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:07:57.877 =================================================================================================================== 00:07:57.877 Total : 15749.00 61.52 0.00 0.00 0.00 0.00 0.00 00:07:57.877 00:07:58.814 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:07:59.072 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.072 Nvme0n1 : 2.00 15939.00 62.26 0.00 0.00 0.00 0.00 0.00 00:07:59.072 =================================================================================================================== 00:07:59.072 Total : 15939.00 62.26 0.00 0.00 0.00 0.00 0.00 00:07:59.072 00:07:59.330 true 00:07:59.330 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:07:59.330 14:09:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:59.589 14:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:59.589 14:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:59.589 14:09:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 831519 00:08:00.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.168 Nvme0n1 : 3.00 16044.67 62.67 0.00 0.00 0.00 0.00 0.00 00:08:00.168 =================================================================================================================== 00:08:00.168 Total : 16044.67 62.67 0.00 0.00 0.00 0.00 0.00 00:08:00.168 00:08:01.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.107 Nvme0n1 : 4.00 16090.75 62.85 0.00 0.00 0.00 0.00 0.00 00:08:01.107 =================================================================================================================== 00:08:01.107 Total : 16090.75 62.85 0.00 0.00 0.00 0.00 0.00 00:08:01.107 00:08:02.046 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:08:02.046 Nvme0n1 : 5.00 16123.80 62.98 0.00 0.00 0.00 0.00 0.00 00:08:02.046 =================================================================================================================== 00:08:02.046 Total : 16123.80 62.98 0.00 0.00 0.00 0.00 0.00 00:08:02.046 00:08:02.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.983 Nvme0n1 : 6.00 16188.17 63.24 0.00 0.00 0.00 0.00 0.00 00:08:02.983 =================================================================================================================== 00:08:02.983 Total : 16188.17 63.24 0.00 0.00 0.00 0.00 0.00 00:08:02.983 00:08:03.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.918 Nvme0n1 : 7.00 16234.43 63.42 0.00 0.00 0.00 0.00 0.00 00:08:03.918 =================================================================================================================== 00:08:03.918 Total : 16234.43 63.42 0.00 0.00 0.00 0.00 0.00 00:08:03.918 00:08:05.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.329 Nvme0n1 : 8.00 16261.00 63.52 0.00 0.00 0.00 0.00 0.00 00:08:05.329 =================================================================================================================== 00:08:05.329 Total : 16261.00 63.52 0.00 0.00 0.00 0.00 0.00 00:08:05.329 00:08:05.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.897 Nvme0n1 : 9.00 16288.67 63.63 0.00 0.00 0.00 0.00 0.00 00:08:05.897 =================================================================================================================== 00:08:05.897 Total : 16288.67 63.63 0.00 0.00 0.00 0.00 0.00 00:08:05.897 00:08:07.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.278 Nvme0n1 : 10.00 16310.80 63.71 0.00 0.00 0.00 0.00 0.00 00:08:07.278 =================================================================================================================== 00:08:07.278 Total : 16310.80 63.71 0.00 0.00 0.00 0.00 0.00 00:08:07.278 00:08:07.278 00:08:07.278 Latency(us) 00:08:07.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.278 Nvme0n1 : 10.01 16310.25 63.71 0.00 0.00 7843.26 3470.98 16796.63 00:08:07.278 =================================================================================================================== 00:08:07.278 Total : 16310.25 63.71 0.00 0.00 7843.26 3470.98 16796.63 00:08:07.278 0 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 831397 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 831397 ']' 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 831397 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 831397 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:07.278 14:09:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 831397' 00:08:07.278 killing process with pid 831397 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 831397 00:08:07.278 Received shutdown signal, test time was about 10.000000 seconds 00:08:07.278 00:08:07.278 Latency(us) 00:08:07.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.278 =================================================================================================================== 00:08:07.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 831397 00:08:07.278 14:09:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:07.536 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:07.794 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:08:07.794 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 828280 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 828280 00:08:08.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 828280 Killed "${NVMF_APP[@]}" "$@" 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=832757 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # 
waitforlisten 832757 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 832757 ']' 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.051 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.311 [2024-07-25 14:09:37.716425] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:08:08.311 [2024-07-25 14:09:37.716517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.311 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.311 [2024-07-25 14:09:37.781652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.311 [2024-07-25 14:09:37.887090] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.311 [2024-07-25 14:09:37.887148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.311 [2024-07-25 14:09:37.887177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.311 [2024-07-25 14:09:37.887189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.311 [2024-07-25 14:09:37.887199] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
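[Annotation, not part of the captured trace] This is the "dirty" part of the test: the original nvmf_tgt (pid 828280) was killed with SIGKILL while the lvstore was live, a fresh target (pid 832757) is started in the same network namespace, and the same backing file is re-registered, which forces blobstore recovery rather than a clean load (the "Performing recovery on blobstore" notices that follow). A rough sketch of that restart sequence, with the workspace path shortened:

  kill -9 828280                                             # kill the old target without a clean shutdown
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # re-attach the same backing file; lvstore metadata is replayed from disk
  rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096

The checks that follow appear to verify that free_clusters and total_data_clusters come back with the post-grow values, i.e. that the grow survived the unclean restart.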
00:08:08.311 [2024-07-25 14:09:37.887234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.571 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:08.571 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:08.571 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:08.571 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:08.571 14:09:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:08.571 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.571 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:08.829 [2024-07-25 14:09:38.253823] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:08.829 [2024-07-25 14:09:38.253965] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:08.829 [2024-07-25 14:09:38.254013] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:08.829 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:08.830 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 00:08:08.830 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 00:08:08.830 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:08.830 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:08.830 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:08.830 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:08.830 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:09.088 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 -t 2000 00:08:09.348 [ 00:08:09.348 { 00:08:09.348 "name": "67e4cf72-86ef-4caf-a12d-1e9bec2f1df0", 00:08:09.348 "aliases": [ 00:08:09.348 "lvs/lvol" 00:08:09.348 ], 00:08:09.348 "product_name": "Logical Volume", 00:08:09.348 "block_size": 4096, 00:08:09.348 "num_blocks": 38912, 00:08:09.348 "uuid": "67e4cf72-86ef-4caf-a12d-1e9bec2f1df0", 00:08:09.348 "assigned_rate_limits": { 00:08:09.348 "rw_ios_per_sec": 0, 00:08:09.348 "rw_mbytes_per_sec": 0, 00:08:09.348 "r_mbytes_per_sec": 0, 00:08:09.348 "w_mbytes_per_sec": 0 00:08:09.348 }, 00:08:09.348 "claimed": false, 00:08:09.348 "zoned": false, 
00:08:09.348 "supported_io_types": { 00:08:09.348 "read": true, 00:08:09.348 "write": true, 00:08:09.348 "unmap": true, 00:08:09.348 "flush": false, 00:08:09.348 "reset": true, 00:08:09.348 "nvme_admin": false, 00:08:09.348 "nvme_io": false, 00:08:09.348 "nvme_io_md": false, 00:08:09.348 "write_zeroes": true, 00:08:09.348 "zcopy": false, 00:08:09.348 "get_zone_info": false, 00:08:09.348 "zone_management": false, 00:08:09.348 "zone_append": false, 00:08:09.348 "compare": false, 00:08:09.348 "compare_and_write": false, 00:08:09.348 "abort": false, 00:08:09.348 "seek_hole": true, 00:08:09.348 "seek_data": true, 00:08:09.348 "copy": false, 00:08:09.348 "nvme_iov_md": false 00:08:09.348 }, 00:08:09.348 "driver_specific": { 00:08:09.348 "lvol": { 00:08:09.348 "lvol_store_uuid": "51b4d2d2-488e-45fc-804a-c7f172dbb96b", 00:08:09.348 "base_bdev": "aio_bdev", 00:08:09.348 "thin_provision": false, 00:08:09.348 "num_allocated_clusters": 38, 00:08:09.348 "snapshot": false, 00:08:09.348 "clone": false, 00:08:09.348 "esnap_clone": false 00:08:09.348 } 00:08:09.348 } 00:08:09.348 } 00:08:09.348 ] 00:08:09.348 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:09.348 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:08:09.348 14:09:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:09.607 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:09.607 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:08:09.607 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:09.867 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:09.867 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.127 [2024-07-25 14:09:39.587208] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:10.127 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:08:10.386 request: 00:08:10.386 { 00:08:10.386 "uuid": "51b4d2d2-488e-45fc-804a-c7f172dbb96b", 00:08:10.386 "method": "bdev_lvol_get_lvstores", 00:08:10.386 "req_id": 1 00:08:10.386 } 00:08:10.386 Got JSON-RPC error response 00:08:10.386 response: 00:08:10.386 { 00:08:10.386 "code": -19, 00:08:10.386 "message": "No such device" 00:08:10.386 } 00:08:10.386 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:10.386 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:10.386 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:10.386 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:10.387 14:09:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.646 aio_bdev 00:08:10.646 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 00:08:10.646 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 00:08:10.646 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:10.646 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:10.646 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:10.646 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:10.646 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:10.904 14:09:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 -t 2000 00:08:11.164 [ 00:08:11.164 { 00:08:11.164 "name": "67e4cf72-86ef-4caf-a12d-1e9bec2f1df0", 00:08:11.164 "aliases": [ 00:08:11.164 "lvs/lvol" 00:08:11.164 ], 00:08:11.164 "product_name": "Logical Volume", 00:08:11.164 "block_size": 4096, 00:08:11.164 "num_blocks": 38912, 00:08:11.164 "uuid": "67e4cf72-86ef-4caf-a12d-1e9bec2f1df0", 00:08:11.164 "assigned_rate_limits": { 00:08:11.164 "rw_ios_per_sec": 0, 00:08:11.164 "rw_mbytes_per_sec": 0, 00:08:11.164 "r_mbytes_per_sec": 0, 00:08:11.164 "w_mbytes_per_sec": 0 00:08:11.164 }, 00:08:11.164 "claimed": false, 00:08:11.164 "zoned": false, 00:08:11.164 "supported_io_types": { 00:08:11.164 "read": true, 00:08:11.164 "write": true, 00:08:11.164 "unmap": true, 00:08:11.164 "flush": false, 00:08:11.164 "reset": true, 00:08:11.164 "nvme_admin": false, 00:08:11.164 "nvme_io": false, 00:08:11.164 "nvme_io_md": false, 00:08:11.164 "write_zeroes": true, 00:08:11.164 "zcopy": false, 00:08:11.164 "get_zone_info": false, 00:08:11.164 "zone_management": false, 00:08:11.164 "zone_append": false, 00:08:11.164 "compare": false, 00:08:11.164 "compare_and_write": false, 00:08:11.164 "abort": false, 00:08:11.164 "seek_hole": true, 00:08:11.164 "seek_data": true, 00:08:11.164 "copy": false, 00:08:11.164 "nvme_iov_md": false 00:08:11.164 }, 00:08:11.164 "driver_specific": { 00:08:11.164 "lvol": { 00:08:11.164 "lvol_store_uuid": "51b4d2d2-488e-45fc-804a-c7f172dbb96b", 00:08:11.164 "base_bdev": "aio_bdev", 00:08:11.164 "thin_provision": false, 00:08:11.164 "num_allocated_clusters": 38, 00:08:11.164 "snapshot": false, 00:08:11.164 "clone": false, 00:08:11.164 "esnap_clone": false 00:08:11.164 } 00:08:11.164 } 00:08:11.164 } 00:08:11.164 ] 00:08:11.164 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:11.164 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:08:11.164 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:11.423 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:11.423 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 00:08:11.423 14:09:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:11.683 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:11.683 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 67e4cf72-86ef-4caf-a12d-1e9bec2f1df0 00:08:11.942 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b 
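[Annotation, not part of the captured trace] After recovery the test re-reads the lvstore and then tears everything down. A compact sketch of the verification and cleanup calls seen above and just below, with the UUIDs copied from this run and the workspace path shortened:

  rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b | jq -r '.[0].free_clusters'        # expects 61
  rpc.py bdev_lvol_get_lvstores -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b | jq -r '.[0].total_data_clusters'  # expects 99
  rpc.py bdev_lvol_delete 67e4cf72-86ef-4caf-a12d-1e9bec2f1df0            # remove the lvol
  rpc.py bdev_lvol_delete_lvstore -u 51b4d2d2-488e-45fc-804a-c7f172dbb96b # remove the lvstore
  rpc.py bdev_aio_delete aio_bdev                                         # drop the AIO bdev
  rm -f /tmp/aio_bdev_file                                                # remove the backing file

On a fresh run these UUIDs will of course differ; they are whatever bdev_lvol_create_lvstore and bdev_lvol_create printed during setup.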
00:08:12.202 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.202 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.463 00:08:12.463 real 0m19.049s 00:08:12.463 user 0m48.363s 00:08:12.463 sys 0m4.559s 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:12.463 ************************************ 00:08:12.463 END TEST lvs_grow_dirty 00:08:12.463 ************************************ 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:12.463 nvmf_trace.0 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.463 rmmod nvme_tcp 00:08:12.463 rmmod nvme_fabrics 00:08:12.463 rmmod nvme_keyring 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 832757 ']' 00:08:12.463 
14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 832757 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 832757 ']' 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 832757 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:12.463 14:09:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 832757 00:08:12.463 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:12.463 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:12.463 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 832757' 00:08:12.463 killing process with pid 832757 00:08:12.463 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 832757 00:08:12.463 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 832757 00:08:12.723 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.723 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.723 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.723 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.723 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.723 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.723 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:12.723 14:09:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.261 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.261 00:08:15.261 real 0m41.956s 00:08:15.261 user 1m10.923s 00:08:15.261 sys 0m8.401s 00:08:15.261 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.261 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.261 ************************************ 00:08:15.261 END TEST nvmf_lvs_grow 00:08:15.262 ************************************ 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:15.262 ************************************ 00:08:15.262 START TEST nvmf_bdev_io_wait 
00:08:15.262 ************************************ 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:15.262 * Looking for test storage... 00:08:15.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.262 
14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.262 14:09:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.167 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:17.167 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:17.167 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:17.167 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:17.167 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:17.167 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:17.167 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:17.167 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:17.168 14:09:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:17.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:17.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:17.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:17.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:17.168 14:09:46 
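Note: the discovery loop traced above resolves each supported NIC (here the two Intel E810 ports, device ID 0x159b, driver ice) to its kernel net device by globbing the PCI device's net/ directory in sysfs. A minimal standalone sketch of that lookup, using the PCI addresses from this run (assumes the same sysfs layout; error handling omitted):

# map a PCI BDF to its net device name, as nvmf/common.sh does above
for pci in 0000:0a:00.0 0000:0a:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # keep only the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done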
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:17.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:08:17.168 00:08:17.168 --- 10.0.0.2 ping statistics --- 00:08:17.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.168 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:08:17.168 00:08:17.168 --- 10.0.0.1 ping statistics --- 00:08:17.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.168 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:17.168 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=835281 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 835281 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 835281 ']' 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:17.169 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.169 [2024-07-25 14:09:46.753211] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
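Condensed, the nvmf_tcp_init and nvmfappstart steps traced here amount to the following bring-up: the first E810 port (cvl_0_0) is moved into a dedicated namespace for the target, the second port (cvl_0_1) stays in the root namespace as the initiator side, the two are addressed back-to-back on 10.0.0.0/24, the firewall is opened for NVMe/TCP, and nvmf_tgt is started inside the namespace (the rpc_cmd calls that follow then provision it). A sketch of those commands, taken from the trace and meant to be run as root from the SPDK repository root:

# target port in its own namespace, initiator port in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # make sure the firewall does not block NVMe/TCP
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
modprobe nvme-tcp

# start the target on 4 cores (-m 0xF) inside the namespace with tracepoints enabled (-e 0xFFFF);
# --wait-for-rpc defers initialization until the RPCs that follow in the trace
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &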
00:08:17.169 [2024-07-25 14:09:46.753297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.169 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.169 [2024-07-25 14:09:46.818567] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.428 [2024-07-25 14:09:46.927742] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:17.428 [2024-07-25 14:09:46.927797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:17.428 [2024-07-25 14:09:46.927824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:17.428 [2024-07-25 14:09:46.927839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:17.428 [2024-07-25 14:09:46.927848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:17.428 [2024-07-25 14:09:46.927965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.428 [2024-07-25 14:09:46.928071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.428 [2024-07-25 14:09:46.928131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.428 [2024-07-25 14:09:46.928135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.429 14:09:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.429 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.429 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:17.429 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.429 14:09:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.429 [2024-07-25 14:09:47.069444] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:17.429 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.429 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:17.429 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.429 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.688 Malloc0 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:17.688 [2024-07-25 14:09:47.130458] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=835428 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=835430 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:17.688 { 00:08:17.688 "params": { 00:08:17.688 "name": "Nvme$subsystem", 00:08:17.688 "trtype": "$TEST_TRANSPORT", 00:08:17.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.688 "adrfam": "ipv4", 00:08:17.688 "trsvcid": "$NVMF_PORT", 00:08:17.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.688 "hdgst": ${hdgst:-false}, 00:08:17.688 "ddgst": ${ddgst:-false} 00:08:17.688 }, 00:08:17.688 "method": "bdev_nvme_attach_controller" 00:08:17.688 } 00:08:17.688 EOF 00:08:17.688 )") 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=835432 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:17.688 { 00:08:17.688 "params": { 00:08:17.688 "name": "Nvme$subsystem", 00:08:17.688 "trtype": "$TEST_TRANSPORT", 00:08:17.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.688 "adrfam": "ipv4", 00:08:17.688 "trsvcid": "$NVMF_PORT", 00:08:17.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.688 "hdgst": ${hdgst:-false}, 00:08:17.688 "ddgst": ${ddgst:-false} 00:08:17.688 }, 00:08:17.688 "method": "bdev_nvme_attach_controller" 00:08:17.688 } 00:08:17.688 EOF 00:08:17.688 )") 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=835435 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:17.688 { 00:08:17.688 "params": { 00:08:17.688 "name": "Nvme$subsystem", 00:08:17.688 "trtype": "$TEST_TRANSPORT", 00:08:17.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.688 "adrfam": "ipv4", 00:08:17.688 "trsvcid": "$NVMF_PORT", 00:08:17.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.688 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:08:17.688 "hdgst": ${hdgst:-false}, 00:08:17.688 "ddgst": ${ddgst:-false} 00:08:17.688 }, 00:08:17.688 "method": "bdev_nvme_attach_controller" 00:08:17.688 } 00:08:17.688 EOF 00:08:17.688 )") 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:17.688 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:17.689 { 00:08:17.689 "params": { 00:08:17.689 "name": "Nvme$subsystem", 00:08:17.689 "trtype": "$TEST_TRANSPORT", 00:08:17.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:17.689 "adrfam": "ipv4", 00:08:17.689 "trsvcid": "$NVMF_PORT", 00:08:17.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:17.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:17.689 "hdgst": ${hdgst:-false}, 00:08:17.689 "ddgst": ${ddgst:-false} 00:08:17.689 }, 00:08:17.689 "method": "bdev_nvme_attach_controller" 00:08:17.689 } 00:08:17.689 EOF 00:08:17.689 )") 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 835428 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:17.689 "params": { 00:08:17.689 "name": "Nvme1", 00:08:17.689 "trtype": "tcp", 00:08:17.689 "traddr": "10.0.0.2", 00:08:17.689 "adrfam": "ipv4", 00:08:17.689 "trsvcid": "4420", 00:08:17.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.689 "hdgst": false, 00:08:17.689 "ddgst": false 00:08:17.689 }, 00:08:17.689 "method": "bdev_nvme_attach_controller" 00:08:17.689 }' 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:17.689 "params": { 00:08:17.689 "name": "Nvme1", 00:08:17.689 "trtype": "tcp", 00:08:17.689 "traddr": "10.0.0.2", 00:08:17.689 "adrfam": "ipv4", 00:08:17.689 "trsvcid": "4420", 00:08:17.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.689 "hdgst": false, 00:08:17.689 "ddgst": false 00:08:17.689 }, 00:08:17.689 "method": "bdev_nvme_attach_controller" 00:08:17.689 }' 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:17.689 "params": { 00:08:17.689 "name": "Nvme1", 00:08:17.689 "trtype": "tcp", 00:08:17.689 "traddr": "10.0.0.2", 00:08:17.689 "adrfam": "ipv4", 00:08:17.689 "trsvcid": "4420", 00:08:17.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.689 "hdgst": false, 00:08:17.689 "ddgst": false 00:08:17.689 }, 00:08:17.689 "method": "bdev_nvme_attach_controller" 00:08:17.689 }' 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:17.689 14:09:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:17.689 "params": { 00:08:17.689 "name": "Nvme1", 00:08:17.689 "trtype": "tcp", 00:08:17.689 "traddr": "10.0.0.2", 00:08:17.689 "adrfam": "ipv4", 00:08:17.689 "trsvcid": "4420", 00:08:17.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:17.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:17.689 "hdgst": false, 00:08:17.689 "ddgst": false 00:08:17.689 }, 00:08:17.689 "method": "bdev_nvme_attach_controller" 00:08:17.689 }' 00:08:17.689 [2024-07-25 14:09:47.177337] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:08:17.689 [2024-07-25 14:09:47.177337] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:08:17.689 [2024-07-25 14:09:47.177338] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:08:17.689 [2024-07-25 14:09:47.177336] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:08:17.689 [2024-07-25 14:09:47.177447] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 14:09:47.177448] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 14:09:47.177449] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-25 14:09:47.177449] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:17.689 --proc-type=auto ] 00:08:17.689 --proc-type=auto ] 00:08:17.689 --proc-type=auto ] 00:08:17.689 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.689 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.947 [2024-07-25 14:09:47.342454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.947 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.947 [2024-07-25 14:09:47.439587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:17.947 [2024-07-25 14:09:47.443189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.947 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.947 [2024-07-25 14:09:47.539942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:17.947 [2024-07-25 14:09:47.541551] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.205 [2024-07-25 14:09:47.640312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:18.205 [2024-07-25 14:09:47.642998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.205 [2024-07-25 14:09:47.744834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:18.205 Running I/O for 1 seconds... 00:08:18.464 Running I/O for 1 seconds... 00:08:18.464 Running I/O for 1 seconds... 00:08:18.464 Running I/O for 1 seconds... 
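The four "Running I/O for 1 seconds..." lines above come from four bdevperf processes launched in parallel, one per workload, each pinned to its own core and shared-memory instance and fed the JSON configuration shown earlier. Condensed (flags as in the trace, run from the SPDK repository root):

# -m core mask, -i shm instance id, -q 128 outstanding I/Os, -o 4096-byte I/Os, -t 1 second run, -s 256 MiB hugepage memory
./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
./build/examples/bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 &
./build/examples/bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
./build/examples/bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &
wait

One latency table per job follows; the flush job's much higher IOPS is expected, since a flush against the RAM-backed Malloc0 bdev has essentially no work to do.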
00:08:19.402 00:08:19.402 Latency(us) 00:08:19.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.402 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:19.402 Nvme1n1 : 1.01 11272.46 44.03 0.00 0.00 11312.41 6456.51 19612.25 00:08:19.402 =================================================================================================================== 00:08:19.402 Total : 11272.46 44.03 0.00 0.00 11312.41 6456.51 19612.25 00:08:19.402 00:08:19.402 Latency(us) 00:08:19.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.402 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:19.402 Nvme1n1 : 1.02 5215.66 20.37 0.00 0.00 24254.41 11116.85 40001.23 00:08:19.402 =================================================================================================================== 00:08:19.402 Total : 5215.66 20.37 0.00 0.00 24254.41 11116.85 40001.23 00:08:19.402 00:08:19.402 Latency(us) 00:08:19.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.403 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:19.403 Nvme1n1 : 1.00 172877.74 675.30 0.00 0.00 737.36 268.52 1171.15 00:08:19.403 =================================================================================================================== 00:08:19.403 Total : 172877.74 675.30 0.00 0.00 737.36 268.52 1171.15 00:08:19.403 00:08:19.403 Latency(us) 00:08:19.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.403 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:19.403 Nvme1n1 : 1.01 5469.17 21.36 0.00 0.00 23314.58 6553.60 51652.08 00:08:19.403 =================================================================================================================== 00:08:19.403 Total : 5469.17 21.36 0.00 0.00 23314.58 6553.60 51652.08 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 835430 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 835432 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 835435 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:08:19.662 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.662 rmmod nvme_tcp 00:08:19.922 rmmod nvme_fabrics 00:08:19.922 rmmod nvme_keyring 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 835281 ']' 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 835281 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 835281 ']' 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 835281 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 835281 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 835281' 00:08:19.922 killing process with pid 835281 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 835281 00:08:19.922 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 835281 00:08:20.183 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.183 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.183 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.183 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.183 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.183 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.183 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.183 14:09:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.089 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.090 00:08:22.090 real 0m7.312s 00:08:22.090 user 0m16.448s 00:08:22.090 sys 0m3.664s 00:08:22.090 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.090 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.090 ************************************ 00:08:22.090 END TEST nvmf_bdev_io_wait 
00:08:22.090 ************************************ 00:08:22.090 14:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:22.090 14:09:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:22.090 14:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.090 14:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.090 14:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.090 ************************************ 00:08:22.090 START TEST nvmf_queue_depth 00:08:22.090 ************************************ 00:08:22.090 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:22.348 * Looking for test storage... 00:08:22.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.348 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.349 
14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 
00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.349 14:09:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 
-- # local -ga net_devs 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.300 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:24.301 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:24.301 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:24.301 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:24.301 Found net devices under 
0000:0a:00.1: cvl_0_1 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.301 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.560 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.560 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.560 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.560 14:09:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:24.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:08:24.560 00:08:24.560 --- 10.0.0.2 ping statistics --- 00:08:24.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.560 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:08:24.560 00:08:24.560 --- 10.0.0.1 ping statistics --- 00:08:24.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.560 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=837651 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 837651 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 837651 ']' 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
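Condensed from the xtrace above, the nvmf_tcp_init step amounts roughly to the shell sequence below. The commands are the ones traced in this run; the interface names cvl_0_0/cvl_0_1 (the two ice ports found earlier) and the 10.0.0.x addresses are the values this harness happened to use, not fixed defaults.

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # dedicated namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator keeps the other port
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept inbound TCP 4420 (NVMe/TCP) on cvl_0_1
  ping -c 1 10.0.0.2                                 # initiator -> target reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator reachability check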
00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.560 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.560 [2024-07-25 14:09:54.107940] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:08:24.560 [2024-07-25 14:09:54.108041] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.560 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.560 [2024-07-25 14:09:54.175161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.820 [2024-07-25 14:09:54.290388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.820 [2024-07-25 14:09:54.290447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.820 [2024-07-25 14:09:54.290471] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.820 [2024-07-25 14:09:54.290490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.820 [2024-07-25 14:09:54.290504] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.820 [2024-07-25 14:09:54.290562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:24.820 [2024-07-25 14:09:54.436988] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.820 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.079 Malloc0 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.079 [2024-07-25 14:09:54.495987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=837687 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 837687 /var/tmp/bdevperf.sock 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 837687 ']' 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:25.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.079 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.079 [2024-07-25 14:09:54.539885] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
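Pulling the rpc_cmd calls traced above (and the initiator-side attach traced just below) into one place, the queue_depth test drives roughly the following sequence. This is a sketch, not the literal script: rpc_cmd issues the same JSON-RPC methods that scripts/rpc.py exposes, paths are shown relative to the spdk checkout, the "&" backgrounding stands in for the harness's own process management, and the 64 MiB / 512-byte Malloc geometry comes from MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE set at the top of queue_depth.sh.

  # target: nvmf_tgt started inside the cvl_0_0_ns_spdk namespace on core mask 0x2
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  # configure it over the default RPC socket /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator: bdevperf in -z mode (start idle, wait for an RPC to launch the workload),
  # queue depth 1024, 4 KiB verify I/O for 10 seconds
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests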
00:08:25.079 [2024-07-25 14:09:54.539947] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837687 ] 00:08:25.079 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.079 [2024-07-25 14:09:54.596722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.079 [2024-07-25 14:09:54.703252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.338 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.338 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:25.338 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:25.338 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.338 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.597 NVMe0n1 00:08:25.597 14:09:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.597 14:09:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:25.597 Running I/O for 10 seconds... 00:08:35.581 00:08:35.581 Latency(us) 00:08:35.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.581 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:35.581 Verification LBA range: start 0x0 length 0x4000 00:08:35.581 NVMe0n1 : 10.07 9375.49 36.62 0.00 0.00 108732.33 17282.09 70293.43 00:08:35.581 =================================================================================================================== 00:08:35.581 Total : 9375.49 36.62 0.00 0.00 108732.33 17282.09 70293.43 00:08:35.581 0 00:08:35.581 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 837687 00:08:35.581 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 837687 ']' 00:08:35.581 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 837687 00:08:35.581 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:35.581 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.581 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 837687 00:08:35.840 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:35.841 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:35.841 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 837687' 00:08:35.841 killing process with pid 837687 00:08:35.841 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 837687 00:08:35.841 Received shutdown signal, 
test time was about 10.000000 seconds 00:08:35.841 00:08:35.841 Latency(us) 00:08:35.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.841 =================================================================================================================== 00:08:35.841 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:35.841 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 837687 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:36.100 rmmod nvme_tcp 00:08:36.100 rmmod nvme_fabrics 00:08:36.100 rmmod nvme_keyring 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 837651 ']' 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 837651 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 837651 ']' 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 837651 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 837651 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 837651' 00:08:36.100 killing process with pid 837651 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 837651 00:08:36.100 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 837651 00:08:36.358 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:36.358 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:36.358 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:08:36.358 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:36.358 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:36.358 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.358 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.358 14:10:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:38.897 00:08:38.897 real 0m16.205s 00:08:38.897 user 0m22.766s 00:08:38.897 sys 0m3.110s 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:38.897 ************************************ 00:08:38.897 END TEST nvmf_queue_depth 00:08:38.897 ************************************ 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:38.897 ************************************ 00:08:38.897 START TEST nvmf_target_multipath 00:08:38.897 ************************************ 00:08:38.897 14:10:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:38.897 * Looking for test storage... 
00:08:38.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.897 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:08:38.898 14:10:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:40.806 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:40.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.806 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:40.807 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.807 14:10:10 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:40.807 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:40.807 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:08:40.807 00:08:40.807 --- 10.0.0.2 ping statistics --- 00:08:40.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.807 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:40.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:08:40.807 00:08:40.807 --- 10.0.0.1 ping statistics --- 00:08:40.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.807 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:40.807 only one NIC for nvmf test 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:40.807 rmmod nvme_tcp 00:08:40.807 rmmod nvme_fabrics 00:08:40.807 rmmod nvme_keyring 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:40.807 14:10:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:43.343 00:08:43.343 real 0m4.430s 
00:08:43.343 user 0m0.916s 00:08:43.343 sys 0m1.518s 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:43.343 ************************************ 00:08:43.343 END TEST nvmf_target_multipath 00:08:43.343 ************************************ 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.343 ************************************ 00:08:43.343 START TEST nvmf_zcopy 00:08:43.343 ************************************ 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:43.343 * Looking for test storage... 00:08:43.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.343 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:43.344 14:10:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:08:43.344 14:10:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:08:45.252 14:10:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.252 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.253 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.253 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:45.253 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.253 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.253 14:10:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:08:45.253 00:08:45.253 --- 10.0.0.2 ping statistics --- 00:08:45.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.253 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:08:45.253 00:08:45.253 --- 10.0.0.1 ping statistics --- 00:08:45.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.253 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=842874 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 842874 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 842874 ']' 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.253 14:10:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.253 [2024-07-25 14:10:14.857446] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:08:45.253 [2024-07-25 14:10:14.857517] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.253 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.513 [2024-07-25 14:10:14.920470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.513 [2024-07-25 14:10:15.027295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.513 [2024-07-25 14:10:15.027371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.513 [2024-07-25 14:10:15.027393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.513 [2024-07-25 14:10:15.027419] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.513 [2024-07-25 14:10:15.027434] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.513 [2024-07-25 14:10:15.027486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.513 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.773 [2024-07-25 14:10:15.167496] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.773 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.773 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:45.773 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.773 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.773 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.773 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.773 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.773 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.774 [2024-07-25 14:10:15.183663] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.774 malloc0 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:45.774 { 00:08:45.774 "params": { 00:08:45.774 "name": "Nvme$subsystem", 00:08:45.774 "trtype": "$TEST_TRANSPORT", 00:08:45.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.774 "adrfam": "ipv4", 00:08:45.774 "trsvcid": "$NVMF_PORT", 00:08:45.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.774 "hdgst": ${hdgst:-false}, 00:08:45.774 "ddgst": ${ddgst:-false} 00:08:45.774 }, 00:08:45.774 "method": "bdev_nvme_attach_controller" 00:08:45.774 } 00:08:45.774 EOF 00:08:45.774 )") 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:45.774 14:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:45.774 "params": { 00:08:45.774 "name": "Nvme1", 00:08:45.774 "trtype": "tcp", 00:08:45.774 "traddr": "10.0.0.2", 00:08:45.774 "adrfam": "ipv4", 00:08:45.774 "trsvcid": "4420", 00:08:45.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:45.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:45.774 "hdgst": false, 00:08:45.774 "ddgst": false 00:08:45.774 }, 00:08:45.774 "method": "bdev_nvme_attach_controller" 00:08:45.774 }' 00:08:45.774 [2024-07-25 14:10:15.277660] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:08:45.774 [2024-07-25 14:10:15.277751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842957 ] 00:08:45.774 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.774 [2024-07-25 14:10:15.341987] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.034 [2024-07-25 14:10:15.451353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.034 Running I/O for 10 seconds... 00:08:58.261 00:08:58.261 Latency(us) 00:08:58.261 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:58.261 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:58.261 Verification LBA range: start 0x0 length 0x1000 00:08:58.261 Nvme1n1 : 10.01 5981.26 46.73 0.00 0.00 21340.42 3883.61 28932.93 00:08:58.261 =================================================================================================================== 00:08:58.261 Total : 5981.26 46.73 0.00 0.00 21340.42 3883.61 28932.93 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=844212 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:58.261 { 00:08:58.261 "params": { 00:08:58.261 "name": "Nvme$subsystem", 00:08:58.261 "trtype": "$TEST_TRANSPORT", 00:08:58.261 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:58.261 "adrfam": "ipv4", 00:08:58.261 "trsvcid": "$NVMF_PORT", 00:08:58.261 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:58.261 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:58.261 "hdgst": ${hdgst:-false}, 00:08:58.261 "ddgst": ${ddgst:-false} 00:08:58.261 }, 00:08:58.261 "method": "bdev_nvme_attach_controller" 00:08:58.261 } 00:08:58.261 EOF 00:08:58.261 )") 00:08:58.261 14:10:25 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:58.261 [2024-07-25 14:10:25.989760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.261 [2024-07-25 14:10:25.989805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:58.261 14:10:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:58.261 "params": { 00:08:58.261 "name": "Nvme1", 00:08:58.261 "trtype": "tcp", 00:08:58.261 "traddr": "10.0.0.2", 00:08:58.261 "adrfam": "ipv4", 00:08:58.261 "trsvcid": "4420", 00:08:58.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:58.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:58.261 "hdgst": false, 00:08:58.261 "ddgst": false 00:08:58.261 }, 00:08:58.261 "method": "bdev_nvme_attach_controller" 00:08:58.261 }' 00:08:58.261 [2024-07-25 14:10:25.997707] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.261 [2024-07-25 14:10:25.997731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.261 [2024-07-25 14:10:26.005728] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.261 [2024-07-25 14:10:26.005750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.261 [2024-07-25 14:10:26.013749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.261 [2024-07-25 14:10:26.013772] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.261 [2024-07-25 14:10:26.021772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.261 [2024-07-25 14:10:26.021794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.261 [2024-07-25 14:10:26.029327] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:08:58.261 [2024-07-25 14:10:26.029419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844212 ] 00:08:58.261 [2024-07-25 14:10:26.029812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.029841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.037816] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.037839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.045837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.045860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.053857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.053879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.262 [2024-07-25 14:10:26.061883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.061906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.069902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.069924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.077925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.077947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.085945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.085967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.088149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.262 [2024-07-25 14:10:26.093995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.094026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.102029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.102088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.110012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.110048] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.118032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.118075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.126077] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.126100] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.134095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.134132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.142133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.142156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.150183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.150217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.158200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.158236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.166189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.166213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.174202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.174224] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.182222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.182245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.190247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.190272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.198268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.198292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.200467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.262 [2024-07-25 14:10:26.206287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.206311] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.214316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.214355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.222394] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.222443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.230427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.230464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.238443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.238480] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.246472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.246519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.254471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.254511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.262507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.262546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.270485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.270508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.278538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.278575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.286556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.286593] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.294556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.294583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.302562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.302583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.310584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.310605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.318615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.318638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.326632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.326655] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.334653] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.334675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.342675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.342697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.350698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.350720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.358719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:58.262 [2024-07-25 14:10:26.358742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.366741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.366762] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.374763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.374784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.382786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.382808] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.390811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.390832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.398833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.262 [2024-07-25 14:10:26.398855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.262 [2024-07-25 14:10:26.406857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.263 [2024-07-25 14:10:26.406880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.263 [2024-07-25 14:10:26.414879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.263 [2024-07-25 14:10:26.414901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.263 [2024-07-25 14:10:26.422902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.263 [2024-07-25 14:10:26.422924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.263 [2024-07-25 14:10:26.430925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.263 [2024-07-25 14:10:26.430947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.263 [2024-07-25 14:10:26.438943] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.263 [2024-07-25 14:10:26.438964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.263 [2024-07-25 14:10:26.446970] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.263 [2024-07-25 14:10:26.446993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.263 [2024-07-25 14:10:26.454989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.263 [2024-07-25 14:10:26.455010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.263 [2024-07-25 14:10:26.463017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:58.263 [2024-07-25 14:10:26.463056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:58.263 Running I/O for 5 seconds... 
00:08:58.263 [2024-07-25 14:10:26.471051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.263 [2024-07-25 14:10:26.471081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.263 [2024-07-25 14:10:26.483591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.263 [2024-07-25 14:10:26.483636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.263 [2024-07-25 14:10:26.493959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.263 [2024-07-25 14:10:26.493989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:58.263 [2024-07-25 14:10:26.504537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:58.263 [2024-07-25 14:10:26.504566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats for each add-namespace attempt from 14:10:26.515 through 14:10:29.761, elapsed 00:08:58.263 through 00:09:00.119 ...]
00:09:00.378 [2024-07-25 14:10:29.772112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:00.378 [2024-07-25 14:10:29.772141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:00.378 [2024-07-25 14:10:29.782671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:00.378 [2024-07-25 14:10:29.782699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:00.378 [2024-07-25 14:10:29.795117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:00.378 [2024-07-25 14:10:29.795144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:00.378 [2024-07-25 14:10:29.805246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:00.378 [2024-07-25 14:10:29.805274]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.816366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.816395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.828831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.828858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.838618] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.838646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.849250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.849278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.860075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.860105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.872826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.872853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.884520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.884549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.893607] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.893636] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.905285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.905314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.916124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.916153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.926442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.926471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.938488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.938516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.948011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.948039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.958928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.958957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.971832] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.971861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.982037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.982079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:29.992849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:29.992888] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:30.005727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:30.005756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:30.017587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:30.017633] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.378 [2024-07-25 14:10:30.027033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.378 [2024-07-25 14:10:30.027082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.038405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.038437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.048792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.048821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.059489] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.059516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.072257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.072285] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.082334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.082376] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.092510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.092537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.103144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.103171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.113624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.113652] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.124054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.124091] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.134578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.134606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.145280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.145308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.156102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.156138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.168305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.168332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.178169] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.178197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.189066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.189097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.199637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.199664] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.210359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.210386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.638 [2024-07-25 14:10:30.222921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.638 [2024-07-25 14:10:30.222948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.639 [2024-07-25 14:10:30.232569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.639 [2024-07-25 14:10:30.232596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.639 [2024-07-25 14:10:30.242975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.639 [2024-07-25 14:10:30.243002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.639 [2024-07-25 14:10:30.253772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.639 [2024-07-25 14:10:30.253799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.639 [2024-07-25 14:10:30.266171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.639 [2024-07-25 14:10:30.266199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.639 [2024-07-25 14:10:30.276187] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.639 [2024-07-25 14:10:30.276215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.639 [2024-07-25 14:10:30.286799] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.639 [2024-07-25 14:10:30.286834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.297557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.297585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.309994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.310036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.319388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.319416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.329814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.329842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.340374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.340401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.352673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.352701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.361901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.361929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.373321] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.373349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.383552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.383579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.394105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.394132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.404795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.404822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.415915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.415942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.428117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.428145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.438214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.438242] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.448855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.448882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.459358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.459385] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.472027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.472055] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.482081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.482108] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.492767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.492817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.505334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.505362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.515243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.515271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.525978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.526005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.536817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.536845] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.899 [2024-07-25 14:10:30.547140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.899 [2024-07-25 14:10:30.547167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.557621] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.557649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.567573] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.567601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.577465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.577493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.587820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.587847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.598473] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.598501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.608791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.608819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.619232] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.619276] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.629848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.629877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.640368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.640396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.651247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.651274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.664566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.664595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.676568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.676597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.685299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.685326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.697952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.697989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.708391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.708418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.718979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.719007] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.729711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.729739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.741915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.741942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.763419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.763450] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.774147] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.774175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.784857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.784884] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.795444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.795473] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.159 [2024-07-25 14:10:30.806406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.159 [2024-07-25 14:10:30.806437] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.819664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.819694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.830091] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.830118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.840654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.840681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.851121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.851148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.861951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.861978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.874522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.874551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.886311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.886339] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.895674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.895702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.906505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.906533] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.918919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.918946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.928645] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.928673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.938807] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.938834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.949574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.949601] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.961846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.961874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.972546] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.972572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.983122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.983149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:30.993373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:30.993401] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:31.003757] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:31.003785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:31.014225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:31.014253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:31.024924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:31.024952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:31.037360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:31.037388] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:31.047401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:31.047429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.418 [2024-07-25 14:10:31.058266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.418 [2024-07-25 14:10:31.058295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.070464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.070494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.080307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.080335] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.090805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.090833] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.101360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.101389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.111895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.111923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.122511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.122539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.133206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.133234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.143856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.143885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.156631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.156659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.168225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.168252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.177002] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.177030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.188437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.188464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.200768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.200795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.210946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.210974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.221907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.221935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.234689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.677 [2024-07-25 14:10:31.234717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.677 [2024-07-25 14:10:31.244974] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.245002] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.678 [2024-07-25 14:10:31.255642] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.255670] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.678 [2024-07-25 14:10:31.265929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.265956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.678 [2024-07-25 14:10:31.276574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.276602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.678 [2024-07-25 14:10:31.287218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.287246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.678 [2024-07-25 14:10:31.297533] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.297560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.678 [2024-07-25 14:10:31.308124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.308151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.678 [2024-07-25 14:10:31.318945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.318973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.678 [2024-07-25 14:10:31.329575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.678 [2024-07-25 14:10:31.329602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.937 [2024-07-25 14:10:31.342199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.937 [2024-07-25 14:10:31.342227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.937 [2024-07-25 14:10:31.352143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.937 [2024-07-25 14:10:31.352171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.937 [2024-07-25 14:10:31.362672] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.937 [2024-07-25 14:10:31.362700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.937 [2024-07-25 14:10:31.375319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.937 [2024-07-25 14:10:31.375347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.937 [2024-07-25 14:10:31.385305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.937 [2024-07-25 14:10:31.385332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.937 [2024-07-25 14:10:31.395408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.937 [2024-07-25 14:10:31.395436] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.937 [2024-07-25 14:10:31.405806] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.937 [2024-07-25 14:10:31.405834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.416271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.416298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.426623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.426651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.436767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.436794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.447633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.447660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.460016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.460043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.471454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.471481] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.481010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.481037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.491335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.491362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.533270] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.533296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 00:09:01.938 Latency(us) 00:09:01.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.938 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:01.938 Nvme1n1 : 5.05 11857.26 92.63 0.00 0.00 10693.11 4563.25 52428.80 00:09:01.938 =================================================================================================================== 00:09:01.938 Total : 11857.26 92.63 0.00 0.00 10693.11 4563.25 52428.80 00:09:01.938 [2024-07-25 14:10:31.539343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.539367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.547367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.547391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.555393] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.555417] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.563464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.563518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.571493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.571549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.579510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.579562] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.938 [2024-07-25 14:10:31.587529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.938 [2024-07-25 14:10:31.587579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.197 [2024-07-25 14:10:31.595544] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.197 [2024-07-25 14:10:31.595595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.197 [2024-07-25 14:10:31.603585] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.197 [2024-07-25 14:10:31.603640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.197 [2024-07-25 14:10:31.611593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.611646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.619620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.619671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.627640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.627693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.635671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.635723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.643690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.643743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.651708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.651758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.659731] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.659784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.667752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.667803] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.675739] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.675791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.683723] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.683745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.691745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.691765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.699765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.699784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.707786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.707806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.715872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.715919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.723907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.723959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.731897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.731938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.739875] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.739896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.747898] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.747917] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.755919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.755938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.763986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.764011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.772045] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.772105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.780067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.780117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.788012] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.788035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.796027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.796068] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 [2024-07-25 14:10:31.804071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.198 [2024-07-25 14:10:31.804092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (844212) - No such process 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 844212 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.198 delay0 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:02.198 14:10:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:02.457 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.457 [2024-07-25 14:10:31.918866] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:10.580 Initializing NVMe Controllers 00:09:10.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:10.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:10.580 Initialization complete. Launching workers. 
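The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above is expected output: while I/O is in flight the test keeps issuing nvmf_subsystem_add_ns RPCs for NSID 1, and the target rejects every attempt because that NSID is still attached. Once the loop ends, the trace removes the namespace, layers a delay bdev over malloc0, re-exports it as NSID 1, and drives it with the abort example. A rough manual equivalent, sketched from the commands in the trace and assuming an SPDK checkout with a target already listening on 10.0.0.2:4420 and a malloc0 bdev (rpc_cmd is the test wrapper around scripts/rpc.py):
  # sketch only -- NQN, address and parameters copied from the trace above
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  # wrap malloc0 in a delay bdev with large artificial read/write latencies
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # 5-second randrw abort workload, queue depth 64, against the TCP target
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'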
00:09:10.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 233, failed: 23409 00:09:10.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23528, failed to submit 114 00:09:10.580 success 23448, unsuccess 80, failed 0 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:10.580 rmmod nvme_tcp 00:09:10.580 rmmod nvme_fabrics 00:09:10.580 rmmod nvme_keyring 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 842874 ']' 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 842874 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 842874 ']' 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 842874 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 842874 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 842874' 00:09:10.580 killing process with pid 842874 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 842874 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 842874 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.580 14:10:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:11.957 00:09:11.957 real 0m28.958s 00:09:11.957 user 0m42.304s 00:09:11.957 sys 0m9.173s 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.957 ************************************ 00:09:11.957 END TEST nvmf_zcopy 00:09:11.957 ************************************ 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.957 ************************************ 00:09:11.957 START TEST nvmf_nmic 00:09:11.957 ************************************ 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:11.957 * Looking for test storage... 
00:09:11.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.957 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.958 14:10:41 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:11.958 14:10:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.491 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:14.492 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:14.492 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.492 14:10:43 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:14.492 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:14.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:09:14.492 00:09:14.492 --- 10.0.0.2 ping statistics --- 00:09:14.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.492 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:09:14.492 00:09:14.492 --- 10.0.0.1 ping statistics --- 00:09:14.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.492 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=847722 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 847722 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 847722 ']' 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:14.492 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.493 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:14.493 14:10:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.493 [2024-07-25 14:10:43.854256] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:09:14.493 [2024-07-25 14:10:43.854356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.493 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.493 [2024-07-25 14:10:43.920225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.493 [2024-07-25 14:10:44.032310] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.493 [2024-07-25 14:10:44.032371] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.493 [2024-07-25 14:10:44.032399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.493 [2024-07-25 14:10:44.032411] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.493 [2024-07-25 14:10:44.032421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:14.493 [2024-07-25 14:10:44.032475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.493 [2024-07-25 14:10:44.032534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.493 [2024-07-25 14:10:44.032599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.493 [2024-07-25 14:10:44.032603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 [2024-07-25 14:10:44.183486] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 Malloc0 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 [2024-07-25 14:10:44.234667] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:14.752 test case1: single bdev can't be used in multiple subsystems 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 [2024-07-25 14:10:44.258526] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:14.752 [2024-07-25 14:10:44.258555] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:14.752 [2024-07-25 14:10:44.258585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.752 request: 00:09:14.752 { 00:09:14.752 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:14.752 "namespace": { 00:09:14.752 "bdev_name": "Malloc0", 00:09:14.752 "no_auto_visible": false 00:09:14.752 }, 00:09:14.752 "method": "nvmf_subsystem_add_ns", 00:09:14.752 "req_id": 1 00:09:14.752 } 00:09:14.752 Got JSON-RPC error response 00:09:14.752 response: 00:09:14.752 { 00:09:14.752 "code": -32602, 00:09:14.752 "message": "Invalid parameters" 00:09:14.752 } 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:14.752 Adding namespace failed - expected result. 
00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:14.752 test case2: host connect to nvmf target in multiple paths 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.752 [2024-07-25 14:10:44.266639] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.752 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:15.322 14:10:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:16.257 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.257 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:09:16.257 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.257 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:16.257 14:10:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:09:18.193 14:10:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:18.193 14:10:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:18.193 14:10:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.193 14:10:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:18.193 14:10:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.193 14:10:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:09:18.193 14:10:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:18.193 [global] 00:09:18.193 thread=1 00:09:18.193 invalidate=1 00:09:18.193 rw=write 00:09:18.193 time_based=1 00:09:18.193 runtime=1 00:09:18.193 ioengine=libaio 00:09:18.193 direct=1 00:09:18.193 bs=4096 00:09:18.193 iodepth=1 00:09:18.193 norandommap=0 00:09:18.193 numjobs=1 00:09:18.193 00:09:18.193 verify_dump=1 00:09:18.193 verify_backlog=512 00:09:18.193 verify_state_save=0 00:09:18.193 do_verify=1 00:09:18.193 verify=crc32c-intel 00:09:18.193 [job0] 00:09:18.193 filename=/dev/nvme0n1 00:09:18.193 Could not set queue depth (nvme0n1) 00:09:18.193 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:18.193 fio-3.35 00:09:18.193 Starting 1 thread 00:09:19.371 00:09:19.371 job0: (groupid=0, jobs=1): err= 0: pid=848244: Thu Jul 25 14:10:48 2024 00:09:19.371 read: IOPS=1540, BW=6162KiB/s (6310kB/s)(6168KiB/1001msec) 00:09:19.371 slat (nsec): min=6661, max=38414, avg=11000.61, stdev=5074.75 00:09:19.371 clat (usec): min=184, max=40966, avg=390.73, stdev=2531.40 00:09:19.371 lat (usec): min=191, max=41000, avg=401.73, stdev=2532.70 00:09:19.371 clat percentiles (usec): 00:09:19.371 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:09:19.371 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:09:19.371 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 277], 00:09:19.371 | 99.00th=[ 302], 99.50th=[ 363], 99.90th=[41157], 99.95th=[41157], 00:09:19.371 | 99.99th=[41157] 00:09:19.371 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:19.371 slat (nsec): min=8186, max=52059, avg=12940.40, stdev=6146.30 00:09:19.371 clat (usec): min=130, max=291, avg=166.90, stdev=22.91 00:09:19.371 lat (usec): min=139, max=324, avg=179.84, stdev=26.99 00:09:19.371 clat percentiles (usec): 00:09:19.371 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:09:19.371 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 163], 60.00th=[ 172], 00:09:19.371 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 208], 00:09:19.371 | 99.00th=[ 227], 99.50th=[ 239], 99.90th=[ 289], 99.95th=[ 293], 00:09:19.371 | 99.99th=[ 293] 00:09:19.371 bw ( KiB/s): min=10512, max=10512, per=100.00%, avg=10512.00, stdev= 0.00, samples=1 00:09:19.371 iops : min= 2628, max= 2628, avg=2628.00, stdev= 0.00, samples=1 00:09:19.371 lat (usec) : 250=88.44%, 500=11.39% 00:09:19.371 lat (msec) : 50=0.17% 00:09:19.371 cpu : usr=3.50%, sys=5.90%, ctx=3590, majf=0, minf=2 00:09:19.371 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:19.371 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.371 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:19.371 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:19.371 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:19.371 00:09:19.371 Run status group 0 (all jobs): 00:09:19.371 READ: bw=6162KiB/s (6310kB/s), 6162KiB/s-6162KiB/s (6310kB/s-6310kB/s), io=6168KiB (6316kB), run=1001-1001msec 00:09:19.371 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:09:19.371 00:09:19.371 Disk stats (read/write): 00:09:19.371 nvme0n1: ios=1589/2048, merge=0/0, ticks=502/337, in_queue=839, util=91.48% 00:09:19.371 14:10:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:19.371 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.371 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:09:19.629 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:19.629 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.629 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:19.629 14:10:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.629 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:09:19.629 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:19.629 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:19.629 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.629 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.630 rmmod nvme_tcp 00:09:19.630 rmmod nvme_fabrics 00:09:19.630 rmmod nvme_keyring 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 847722 ']' 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 847722 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 847722 ']' 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 847722 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 847722 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 847722' 00:09:19.630 killing process with pid 847722 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 847722 00:09:19.630 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 847722 00:09:19.888 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.888 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.888 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.888 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.888 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.888 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.888 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.888 14:10:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.423 00:09:22.423 real 0m9.992s 00:09:22.423 user 0m22.173s 00:09:22.423 sys 0m2.551s 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 ************************************ 00:09:22.423 END TEST nvmf_nmic 00:09:22.423 ************************************ 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.423 ************************************ 00:09:22.423 START TEST nvmf_fio_target 00:09:22.423 ************************************ 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:22.423 * Looking for test storage... 00:09:22.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.423 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.424 14:10:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.424 14:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.329 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.329 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.329 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.329 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.329 14:10:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:09:24.329 00:09:24.329 --- 10.0.0.2 ping statistics --- 00:09:24.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.329 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:09:24.329 00:09:24.329 --- 10.0.0.1 ping statistics --- 00:09:24.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.329 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:24.329 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=850345 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 850345 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 850345 ']' 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.330 14:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.330 [2024-07-25 14:10:53.969863] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:09:24.330 [2024-07-25 14:10:53.969930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.592 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.592 [2024-07-25 14:10:54.036526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.592 [2024-07-25 14:10:54.148624] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.592 [2024-07-25 14:10:54.148687] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.592 [2024-07-25 14:10:54.148716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.592 [2024-07-25 14:10:54.148728] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.592 [2024-07-25 14:10:54.148737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
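The trace above shows nvmf_tcp_init (nvmf/common.sh) moving one e810 port into a private network namespace, addressing both ends, opening port 4420, and then nvmfappstart launching nvmf_tgt inside that namespace. The following is a minimal, consolidated sketch of those same steps, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses from this run; the socket-wait loop is a simplified stand-in for the real waitforlisten helper, not the test's actual implementation.

#!/usr/bin/env bash
# Sketch of the target-side plumbing logged above (assumptions noted in the lead-in).
set -euo pipefail

NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Start from clean interfaces, then move the target-side port into its own namespace.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

# Initiator side stays in the root namespace; target side lives in $NS.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in and verify reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
modprobe nvme-tcp

# Launch nvmf_tgt inside the namespace, then wait for its RPC socket
# (simplified replacement for waitforlisten, which is more thorough).
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done
echo "nvmf_tgt running as pid $nvmfpid"

Once the socket is up, the test drives everything through $SPDK/scripts/rpc.py (nvmf_create_transport, bdev_malloc_create, bdev_raid_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener), exactly as the subsequent trace lines show.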
00:09:24.592 [2024-07-25 14:10:54.148827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.592 [2024-07-25 14:10:54.148893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.592 [2024-07-25 14:10:54.148946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.592 [2024-07-25 14:10:54.148949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.882 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:24.882 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:24.882 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.882 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:24.882 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.882 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.882 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.141 [2024-07-25 14:10:54.531394] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.141 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.399 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:25.399 14:10:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.658 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:25.658 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.916 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:25.916 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.174 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:26.174 14:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:26.432 14:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.690 14:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:26.690 14:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.948 14:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:26.948 14:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.204 14:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:27.204 14:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:27.461 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:27.718 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.718 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:27.976 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:27.976 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:28.234 14:10:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:28.493 [2024-07-25 14:10:58.041707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:28.493 14:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:28.752 14:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:29.010 14:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:29.948 14:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:29.948 14:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:29.948 14:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:29.948 14:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:29.948 14:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:29.948 14:10:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:31.855 14:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:31.855 14:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:31.855 14:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:31.855 14:11:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:31.855 14:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:31.855 14:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:31.855 14:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:31.855 [global] 00:09:31.855 thread=1 00:09:31.855 invalidate=1 00:09:31.855 rw=write 00:09:31.855 time_based=1 00:09:31.855 runtime=1 00:09:31.855 ioengine=libaio 00:09:31.855 direct=1 00:09:31.855 bs=4096 00:09:31.855 iodepth=1 00:09:31.855 norandommap=0 00:09:31.855 numjobs=1 00:09:31.855 00:09:31.855 verify_dump=1 00:09:31.855 verify_backlog=512 00:09:31.855 verify_state_save=0 00:09:31.855 do_verify=1 00:09:31.855 verify=crc32c-intel 00:09:31.855 [job0] 00:09:31.855 filename=/dev/nvme0n1 00:09:31.855 [job1] 00:09:31.855 filename=/dev/nvme0n2 00:09:31.855 [job2] 00:09:31.855 filename=/dev/nvme0n3 00:09:31.855 [job3] 00:09:31.855 filename=/dev/nvme0n4 00:09:31.855 Could not set queue depth (nvme0n1) 00:09:31.855 Could not set queue depth (nvme0n2) 00:09:31.855 Could not set queue depth (nvme0n3) 00:09:31.855 Could not set queue depth (nvme0n4) 00:09:31.855 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.855 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.855 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.855 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:31.855 fio-3.35 00:09:31.855 Starting 4 threads 00:09:33.232 00:09:33.232 job0: (groupid=0, jobs=1): err= 0: pid=851400: Thu Jul 25 14:11:02 2024 00:09:33.232 read: IOPS=21, BW=85.9KiB/s (87.9kB/s)(88.0KiB/1025msec) 00:09:33.232 slat (nsec): min=8927, max=33255, avg=22083.09, stdev=8539.92 00:09:33.232 clat (usec): min=40569, max=41087, avg=40948.24, stdev=95.71 00:09:33.232 lat (usec): min=40578, max=41105, avg=40970.33, stdev=97.07 00:09:33.232 clat percentiles (usec): 00:09:33.232 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:33.232 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:33.232 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:33.232 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:33.232 | 99.99th=[41157] 00:09:33.232 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:33.232 slat (nsec): min=6330, max=38571, avg=14563.69, stdev=5061.51 00:09:33.232 clat (usec): min=165, max=370, avg=221.89, stdev=21.07 00:09:33.232 lat (usec): min=190, max=378, avg=236.45, stdev=19.84 00:09:33.232 clat percentiles (usec): 00:09:33.232 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:09:33.232 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:09:33.232 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 260], 00:09:33.232 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 371], 99.95th=[ 371], 00:09:33.232 | 99.99th=[ 371] 00:09:33.232 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.232 iops : min= 1024, max= 1024, 
avg=1024.00, stdev= 0.00, samples=1 00:09:33.232 lat (usec) : 250=86.70%, 500=9.18% 00:09:33.232 lat (msec) : 50=4.12% 00:09:33.232 cpu : usr=0.29%, sys=0.88%, ctx=534, majf=0, minf=2 00:09:33.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.232 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.232 job1: (groupid=0, jobs=1): err= 0: pid=851401: Thu Jul 25 14:11:02 2024 00:09:33.232 read: IOPS=20, BW=83.6KiB/s (85.6kB/s)(84.0KiB/1005msec) 00:09:33.232 slat (nsec): min=9745, max=42714, avg=26950.52, stdev=10799.29 00:09:33.232 clat (usec): min=40868, max=42300, avg=41884.57, stdev=328.64 00:09:33.232 lat (usec): min=40902, max=42310, avg=41911.52, stdev=327.06 00:09:33.232 clat percentiles (usec): 00:09:33.232 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:09:33.232 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:09:33.232 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:33.232 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:33.232 | 99.99th=[42206] 00:09:33.232 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:33.232 slat (nsec): min=7190, max=54969, avg=17814.61, stdev=7056.64 00:09:33.232 clat (usec): min=144, max=410, avg=220.10, stdev=47.91 00:09:33.232 lat (usec): min=154, max=434, avg=237.91, stdev=50.29 00:09:33.232 clat percentiles (usec): 00:09:33.232 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 169], 20.00th=[ 180], 00:09:33.232 | 30.00th=[ 192], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 221], 00:09:33.232 | 70.00th=[ 237], 80.00th=[ 255], 90.00th=[ 293], 95.00th=[ 314], 00:09:33.232 | 99.00th=[ 363], 99.50th=[ 392], 99.90th=[ 412], 99.95th=[ 412], 00:09:33.232 | 99.99th=[ 412] 00:09:33.232 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.232 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.232 lat (usec) : 250=75.05%, 500=21.01% 00:09:33.232 lat (msec) : 50=3.94% 00:09:33.232 cpu : usr=0.70%, sys=1.10%, ctx=534, majf=0, minf=1 00:09:33.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.232 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.232 job2: (groupid=0, jobs=1): err= 0: pid=851402: Thu Jul 25 14:11:02 2024 00:09:33.232 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:09:33.232 slat (nsec): min=12473, max=35129, avg=24950.82, stdev=9390.33 00:09:33.232 clat (usec): min=40899, max=41078, avg=40970.65, stdev=50.86 00:09:33.232 lat (usec): min=40931, max=41090, avg=40995.60, stdev=46.10 00:09:33.232 clat percentiles (usec): 00:09:33.232 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:33.232 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:33.232 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:33.232 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:33.232 | 99.99th=[41157] 
00:09:33.232 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:09:33.232 slat (nsec): min=6552, max=39191, avg=15425.18, stdev=5186.33 00:09:33.232 clat (usec): min=149, max=409, avg=181.03, stdev=19.88 00:09:33.232 lat (usec): min=156, max=439, avg=196.46, stdev=21.52 00:09:33.232 clat percentiles (usec): 00:09:33.232 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:09:33.232 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:09:33.232 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:09:33.232 | 99.00th=[ 241], 99.50th=[ 318], 99.90th=[ 408], 99.95th=[ 408], 00:09:33.232 | 99.99th=[ 408] 00:09:33.232 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.232 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.232 lat (usec) : 250=94.94%, 500=0.94% 00:09:33.232 lat (msec) : 50=4.12% 00:09:33.232 cpu : usr=0.40%, sys=0.70%, ctx=536, majf=0, minf=1 00:09:33.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.232 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:33.232 job3: (groupid=0, jobs=1): err= 0: pid=851403: Thu Jul 25 14:11:02 2024 00:09:33.232 read: IOPS=22, BW=89.8KiB/s (91.9kB/s)(92.0KiB/1025msec) 00:09:33.232 slat (nsec): min=10028, max=39146, avg=25340.43, stdev=10761.17 00:09:33.232 clat (usec): min=312, max=41042, avg=39158.51, stdev=8469.06 00:09:33.232 lat (usec): min=331, max=41057, avg=39183.85, stdev=8470.45 00:09:33.232 clat percentiles (usec): 00:09:33.232 | 1.00th=[ 314], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:33.232 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:33.232 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:33.232 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:33.232 | 99.99th=[41157] 00:09:33.232 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:09:33.232 slat (nsec): min=8160, max=58273, avg=19435.04, stdev=7287.45 00:09:33.232 clat (usec): min=160, max=329, avg=216.04, stdev=21.29 00:09:33.232 lat (usec): min=174, max=342, avg=235.48, stdev=18.99 00:09:33.232 clat percentiles (usec): 00:09:33.232 | 1.00th=[ 180], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 200], 00:09:33.232 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 215], 00:09:33.232 | 70.00th=[ 223], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 255], 00:09:33.232 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 330], 99.95th=[ 330], 00:09:33.232 | 99.99th=[ 330] 00:09:33.232 bw ( KiB/s): min= 4096, max= 4096, per=51.25%, avg=4096.00, stdev= 0.00, samples=1 00:09:33.232 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:33.232 lat (usec) : 250=89.16%, 500=6.73% 00:09:33.232 lat (msec) : 50=4.11% 00:09:33.232 cpu : usr=0.59%, sys=1.37%, ctx=536, majf=0, minf=1 00:09:33.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:33.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.232 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:33.232 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:33.232 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:09:33.232 00:09:33.232 Run status group 0 (all jobs): 00:09:33.232 READ: bw=343KiB/s (352kB/s), 83.6KiB/s-89.8KiB/s (85.6kB/s-91.9kB/s), io=352KiB (360kB), run=1005-1025msec 00:09:33.232 WRITE: bw=7992KiB/s (8184kB/s), 1998KiB/s-2038KiB/s (2046kB/s-2087kB/s), io=8192KiB (8389kB), run=1005-1025msec 00:09:33.232 00:09:33.232 Disk stats (read/write): 00:09:33.232 nvme0n1: ios=67/512, merge=0/0, ticks=727/110, in_queue=837, util=86.97% 00:09:33.232 nvme0n2: ios=67/512, merge=0/0, ticks=781/105, in_queue=886, util=90.64% 00:09:33.232 nvme0n3: ios=42/512, merge=0/0, ticks=1641/89, in_queue=1730, util=93.52% 00:09:33.232 nvme0n4: ios=41/512, merge=0/0, ticks=1600/106, in_queue=1706, util=94.32% 00:09:33.232 14:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:33.232 [global] 00:09:33.232 thread=1 00:09:33.232 invalidate=1 00:09:33.232 rw=randwrite 00:09:33.233 time_based=1 00:09:33.233 runtime=1 00:09:33.233 ioengine=libaio 00:09:33.233 direct=1 00:09:33.233 bs=4096 00:09:33.233 iodepth=1 00:09:33.233 norandommap=0 00:09:33.233 numjobs=1 00:09:33.233 00:09:33.233 verify_dump=1 00:09:33.233 verify_backlog=512 00:09:33.233 verify_state_save=0 00:09:33.233 do_verify=1 00:09:33.233 verify=crc32c-intel 00:09:33.233 [job0] 00:09:33.233 filename=/dev/nvme0n1 00:09:33.233 [job1] 00:09:33.233 filename=/dev/nvme0n2 00:09:33.233 [job2] 00:09:33.233 filename=/dev/nvme0n3 00:09:33.233 [job3] 00:09:33.233 filename=/dev/nvme0n4 00:09:33.233 Could not set queue depth (nvme0n1) 00:09:33.233 Could not set queue depth (nvme0n2) 00:09:33.233 Could not set queue depth (nvme0n3) 00:09:33.233 Could not set queue depth (nvme0n4) 00:09:33.491 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.491 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.491 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.491 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:33.491 fio-3.35 00:09:33.491 Starting 4 threads 00:09:34.869 00:09:34.869 job0: (groupid=0, jobs=1): err= 0: pid=851661: Thu Jul 25 14:11:04 2024 00:09:34.869 read: IOPS=2144, BW=8579KiB/s (8785kB/s)(8588KiB/1001msec) 00:09:34.869 slat (nsec): min=5607, max=66448, avg=12263.11, stdev=7880.92 00:09:34.869 clat (usec): min=166, max=795, avg=222.95, stdev=58.48 00:09:34.869 lat (usec): min=172, max=811, avg=235.22, stdev=63.17 00:09:34.869 clat percentiles (usec): 00:09:34.869 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:09:34.869 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 208], 60.00th=[ 212], 00:09:34.869 | 70.00th=[ 219], 80.00th=[ 231], 90.00th=[ 273], 95.00th=[ 343], 00:09:34.869 | 99.00th=[ 482], 99.50th=[ 537], 99.90th=[ 725], 99.95th=[ 783], 00:09:34.869 | 99.99th=[ 799] 00:09:34.869 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:34.869 slat (nsec): min=7251, max=82448, avg=13499.17, stdev=7641.37 00:09:34.869 clat (usec): min=127, max=561, avg=172.90, stdev=47.26 00:09:34.869 lat (usec): min=135, max=636, avg=186.40, stdev=51.34 00:09:34.869 clat percentiles (usec): 00:09:34.869 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:09:34.869 | 30.00th=[ 149], 
40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 165], 00:09:34.869 | 70.00th=[ 176], 80.00th=[ 190], 90.00th=[ 215], 95.00th=[ 255], 00:09:34.869 | 99.00th=[ 396], 99.50th=[ 420], 99.90th=[ 478], 99.95th=[ 502], 00:09:34.869 | 99.99th=[ 562] 00:09:34.869 bw ( KiB/s): min= 9760, max= 9760, per=47.94%, avg=9760.00, stdev= 0.00, samples=1 00:09:34.869 iops : min= 2440, max= 2440, avg=2440.00, stdev= 0.00, samples=1 00:09:34.869 lat (usec) : 250=91.14%, 500=8.48%, 750=0.34%, 1000=0.04% 00:09:34.869 cpu : usr=3.90%, sys=5.70%, ctx=4708, majf=0, minf=1 00:09:34.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.869 issued rwts: total=2147,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.869 job1: (groupid=0, jobs=1): err= 0: pid=851682: Thu Jul 25 14:11:04 2024 00:09:34.869 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:09:34.869 slat (nsec): min=14804, max=49551, avg=22894.50, stdev=9702.29 00:09:34.869 clat (usec): min=352, max=41066, avg=39086.64, stdev=8652.28 00:09:34.869 lat (usec): min=370, max=41090, avg=39109.53, stdev=8653.47 00:09:34.869 clat percentiles (usec): 00:09:34.869 | 1.00th=[ 355], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:34.869 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:34.869 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:34.869 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:34.869 | 99.99th=[41157] 00:09:34.869 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:09:34.869 slat (nsec): min=8221, max=73695, avg=20852.27, stdev=8880.26 00:09:34.869 clat (usec): min=150, max=670, avg=255.72, stdev=82.25 00:09:34.869 lat (usec): min=167, max=707, avg=276.57, stdev=86.17 00:09:34.869 clat percentiles (usec): 00:09:34.869 | 1.00th=[ 155], 5.00th=[ 178], 10.00th=[ 192], 20.00th=[ 200], 00:09:34.869 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 227], 60.00th=[ 237], 00:09:34.869 | 70.00th=[ 251], 80.00th=[ 297], 90.00th=[ 396], 95.00th=[ 441], 00:09:34.869 | 99.00th=[ 490], 99.50th=[ 545], 99.90th=[ 668], 99.95th=[ 668], 00:09:34.869 | 99.99th=[ 668] 00:09:34.869 bw ( KiB/s): min= 4096, max= 4096, per=20.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.869 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.869 lat (usec) : 250=66.67%, 500=28.84%, 750=0.56% 00:09:34.869 lat (msec) : 50=3.93% 00:09:34.869 cpu : usr=0.90%, sys=1.19%, ctx=535, majf=0, minf=1 00:09:34.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.869 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.869 job2: (groupid=0, jobs=1): err= 0: pid=851717: Thu Jul 25 14:11:04 2024 00:09:34.869 read: IOPS=1344, BW=5379KiB/s (5508kB/s)(5384KiB/1001msec) 00:09:34.869 slat (nsec): min=5245, max=72894, avg=18411.39, stdev=10740.84 00:09:34.869 clat (usec): min=205, max=40999, avg=458.54, stdev=2215.11 00:09:34.869 lat (usec): min=215, max=41009, avg=476.95, stdev=2215.14 00:09:34.869 clat percentiles 
(usec): 00:09:34.869 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 269], 00:09:34.869 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 326], 00:09:34.869 | 70.00th=[ 355], 80.00th=[ 445], 90.00th=[ 482], 95.00th=[ 502], 00:09:34.869 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[41157], 99.95th=[41157], 00:09:34.869 | 99.99th=[41157] 00:09:34.869 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:34.869 slat (nsec): min=7261, max=66321, avg=15046.12, stdev=7995.94 00:09:34.869 clat (usec): min=151, max=784, avg=209.10, stdev=34.45 00:09:34.869 lat (usec): min=168, max=805, avg=224.14, stdev=36.75 00:09:34.869 clat percentiles (usec): 00:09:34.869 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 186], 20.00th=[ 192], 00:09:34.869 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:09:34.869 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 237], 95.00th=[ 251], 00:09:34.869 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 758], 99.95th=[ 783], 00:09:34.869 | 99.99th=[ 783] 00:09:34.869 bw ( KiB/s): min= 4096, max= 4096, per=20.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.869 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.869 lat (usec) : 250=54.86%, 500=42.51%, 750=2.43%, 1000=0.07% 00:09:34.869 lat (msec) : 50=0.14% 00:09:34.869 cpu : usr=2.40%, sys=5.10%, ctx=2883, majf=0, minf=2 00:09:34.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.869 issued rwts: total=1346,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.869 job3: (groupid=0, jobs=1): err= 0: pid=851730: Thu Jul 25 14:11:04 2024 00:09:34.869 read: IOPS=20, BW=83.9KiB/s (85.9kB/s)(84.0KiB/1001msec) 00:09:34.869 slat (nsec): min=15240, max=50570, avg=23019.43, stdev=9841.20 00:09:34.869 clat (usec): min=40867, max=41229, avg=40976.93, stdev=80.85 00:09:34.869 lat (usec): min=40893, max=41280, avg=40999.95, stdev=83.78 00:09:34.869 clat percentiles (usec): 00:09:34.869 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:34.869 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:34.869 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:34.869 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:34.869 | 99.99th=[41157] 00:09:34.869 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:09:34.869 slat (nsec): min=8274, max=70546, avg=20393.14, stdev=10304.06 00:09:34.869 clat (usec): min=155, max=585, avg=246.35, stdev=62.35 00:09:34.869 lat (usec): min=179, max=601, avg=266.74, stdev=63.63 00:09:34.869 clat percentiles (usec): 00:09:34.869 | 1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 204], 00:09:34.869 | 30.00th=[ 212], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 237], 00:09:34.869 | 70.00th=[ 249], 80.00th=[ 281], 90.00th=[ 338], 95.00th=[ 375], 00:09:34.869 | 99.00th=[ 449], 99.50th=[ 490], 99.90th=[ 586], 99.95th=[ 586], 00:09:34.869 | 99.99th=[ 586] 00:09:34.869 bw ( KiB/s): min= 4096, max= 4096, per=20.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.869 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.869 lat (usec) : 250=68.11%, 500=27.77%, 750=0.19% 00:09:34.869 lat (msec) : 50=3.94% 00:09:34.869 cpu : usr=0.30%, sys=1.20%, 
ctx=535, majf=0, minf=1 00:09:34.869 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.869 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.869 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.869 00:09:34.869 Run status group 0 (all jobs): 00:09:34.869 READ: bw=13.7MiB/s (14.4MB/s), 83.9KiB/s-8579KiB/s (85.9kB/s-8785kB/s), io=13.8MiB (14.5MB), run=1001-1006msec 00:09:34.869 WRITE: bw=19.9MiB/s (20.8MB/s), 2036KiB/s-9.99MiB/s (2085kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1006msec 00:09:34.869 00:09:34.869 Disk stats (read/write): 00:09:34.869 nvme0n1: ios=1895/2048, merge=0/0, ticks=629/339, in_queue=968, util=99.00% 00:09:34.869 nvme0n2: ios=58/512, merge=0/0, ticks=1003/124, in_queue=1127, util=97.15% 00:09:34.869 nvme0n3: ios=1050/1296, merge=0/0, ticks=1433/259, in_queue=1692, util=97.48% 00:09:34.869 nvme0n4: ios=40/512, merge=0/0, ticks=1641/113, in_queue=1754, util=97.36% 00:09:34.869 14:11:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:34.869 [global] 00:09:34.869 thread=1 00:09:34.869 invalidate=1 00:09:34.869 rw=write 00:09:34.869 time_based=1 00:09:34.870 runtime=1 00:09:34.870 ioengine=libaio 00:09:34.870 direct=1 00:09:34.870 bs=4096 00:09:34.870 iodepth=128 00:09:34.870 norandommap=0 00:09:34.870 numjobs=1 00:09:34.870 00:09:34.870 verify_dump=1 00:09:34.870 verify_backlog=512 00:09:34.870 verify_state_save=0 00:09:34.870 do_verify=1 00:09:34.870 verify=crc32c-intel 00:09:34.870 [job0] 00:09:34.870 filename=/dev/nvme0n1 00:09:34.870 [job1] 00:09:34.870 filename=/dev/nvme0n2 00:09:34.870 [job2] 00:09:34.870 filename=/dev/nvme0n3 00:09:34.870 [job3] 00:09:34.870 filename=/dev/nvme0n4 00:09:34.870 Could not set queue depth (nvme0n1) 00:09:34.870 Could not set queue depth (nvme0n2) 00:09:34.870 Could not set queue depth (nvme0n3) 00:09:34.870 Could not set queue depth (nvme0n4) 00:09:34.870 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.870 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.870 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.870 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:34.870 fio-3.35 00:09:34.870 Starting 4 threads 00:09:36.251 00:09:36.251 job0: (groupid=0, jobs=1): err= 0: pid=851980: Thu Jul 25 14:11:05 2024 00:09:36.251 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:09:36.251 slat (usec): min=2, max=10941, avg=90.45, stdev=551.50 00:09:36.251 clat (usec): min=5988, max=44097, avg=11697.89, stdev=3669.09 00:09:36.251 lat (usec): min=5995, max=44101, avg=11788.34, stdev=3712.29 00:09:36.251 clat percentiles (usec): 00:09:36.251 | 1.00th=[ 7111], 5.00th=[ 8160], 10.00th=[ 9241], 20.00th=[10028], 00:09:36.251 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11076], 60.00th=[11207], 00:09:36.251 | 70.00th=[11731], 80.00th=[12387], 90.00th=[13960], 95.00th=[17433], 00:09:36.251 | 99.00th=[32637], 99.50th=[32637], 99.90th=[36963], 99.95th=[36963], 00:09:36.251 | 99.99th=[44303] 00:09:36.251 write: IOPS=4675, 
BW=18.3MiB/s (19.2MB/s)(18.3MiB/1003msec); 0 zone resets 00:09:36.251 slat (usec): min=3, max=15504, avg=113.93, stdev=728.39 00:09:36.252 clat (usec): min=227, max=95716, avg=15649.39, stdev=14687.38 00:09:36.252 lat (usec): min=654, max=95746, avg=15763.33, stdev=14782.99 00:09:36.252 clat percentiles (usec): 00:09:36.252 | 1.00th=[ 988], 5.00th=[ 6063], 10.00th=[ 8586], 20.00th=[ 9896], 00:09:36.252 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:09:36.252 | 70.00th=[14484], 80.00th=[17957], 90.00th=[24773], 95.00th=[48497], 00:09:36.252 | 99.00th=[89654], 99.50th=[92799], 99.90th=[95945], 99.95th=[95945], 00:09:36.252 | 99.99th=[95945] 00:09:36.252 bw ( KiB/s): min=16384, max=20480, per=29.51%, avg=18432.00, stdev=2896.31, samples=2 00:09:36.252 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:36.252 lat (usec) : 250=0.01%, 750=0.06%, 1000=0.58% 00:09:36.252 lat (msec) : 2=0.37%, 4=0.72%, 10=20.48%, 20=69.86%, 50=5.68% 00:09:36.252 lat (msec) : 100=2.24% 00:09:36.252 cpu : usr=3.99%, sys=6.79%, ctx=349, majf=0, minf=1 00:09:36.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:36.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.252 issued rwts: total=4608,4690,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.252 job1: (groupid=0, jobs=1): err= 0: pid=851981: Thu Jul 25 14:11:05 2024 00:09:36.252 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:09:36.252 slat (usec): min=2, max=9569, avg=87.96, stdev=565.88 00:09:36.252 clat (usec): min=1770, max=46011, avg=11686.00, stdev=5117.73 00:09:36.252 lat (usec): min=1780, max=46029, avg=11773.96, stdev=5156.59 00:09:36.252 clat percentiles (usec): 00:09:36.252 | 1.00th=[ 2212], 5.00th=[ 4080], 10.00th=[ 8029], 20.00th=[ 9765], 00:09:36.252 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:09:36.252 | 70.00th=[11600], 80.00th=[12780], 90.00th=[16450], 95.00th=[21103], 00:09:36.252 | 99.00th=[34341], 99.50th=[40109], 99.90th=[45351], 99.95th=[45351], 00:09:36.252 | 99.99th=[45876] 00:09:36.252 write: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1006msec); 0 zone resets 00:09:36.252 slat (usec): min=3, max=8907, avg=101.46, stdev=576.44 00:09:36.252 clat (usec): min=1052, max=46571, avg=14427.52, stdev=8737.10 00:09:36.252 lat (usec): min=1058, max=46590, avg=14528.97, stdev=8796.63 00:09:36.252 clat percentiles (usec): 00:09:36.252 | 1.00th=[ 3392], 5.00th=[ 6980], 10.00th=[ 8029], 20.00th=[ 9110], 00:09:36.252 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10814], 60.00th=[11207], 00:09:36.252 | 70.00th=[14746], 80.00th=[18220], 90.00th=[28181], 95.00th=[34341], 00:09:36.252 | 99.00th=[44827], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:09:36.252 | 99.99th=[46400] 00:09:36.252 bw ( KiB/s): min=18288, max=21488, per=31.84%, avg=19888.00, stdev=2262.74, samples=2 00:09:36.252 iops : min= 4572, max= 5372, avg=4972.00, stdev=565.69, samples=2 00:09:36.252 lat (msec) : 2=0.31%, 4=2.52%, 10=24.44%, 20=60.90%, 50=11.83% 00:09:36.252 cpu : usr=5.77%, sys=8.76%, ctx=449, majf=0, minf=1 00:09:36.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:36.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:36.252 issued rwts: total=4608,5100,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.252 job2: (groupid=0, jobs=1): err= 0: pid=851982: Thu Jul 25 14:11:05 2024 00:09:36.252 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:09:36.252 slat (usec): min=3, max=13286, avg=138.23, stdev=843.50 00:09:36.252 clat (usec): min=4884, max=50335, avg=15997.21, stdev=6961.15 00:09:36.252 lat (usec): min=4893, max=50352, avg=16135.44, stdev=7026.18 00:09:36.252 clat percentiles (usec): 00:09:36.252 | 1.00th=[ 6194], 5.00th=[10421], 10.00th=[11076], 20.00th=[11994], 00:09:36.252 | 30.00th=[12780], 40.00th=[13173], 50.00th=[13566], 60.00th=[13960], 00:09:36.252 | 70.00th=[15139], 80.00th=[18482], 90.00th=[25822], 95.00th=[31851], 00:09:36.252 | 99.00th=[42730], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:09:36.252 | 99.99th=[50594] 00:09:36.252 write: IOPS=3400, BW=13.3MiB/s (13.9MB/s)(13.4MiB/1011msec); 0 zone resets 00:09:36.252 slat (usec): min=4, max=23485, avg=150.04, stdev=736.71 00:09:36.252 clat (usec): min=652, max=50283, avg=22510.07, stdev=9154.32 00:09:36.252 lat (usec): min=657, max=50291, avg=22660.10, stdev=9215.60 00:09:36.252 clat percentiles (usec): 00:09:36.252 | 1.00th=[ 4752], 5.00th=[ 7570], 10.00th=[ 9372], 20.00th=[13304], 00:09:36.252 | 30.00th=[19006], 40.00th=[21890], 50.00th=[22676], 60.00th=[23987], 00:09:36.252 | 70.00th=[26084], 80.00th=[31065], 90.00th=[34866], 95.00th=[37487], 00:09:36.252 | 99.00th=[43254], 99.50th=[43254], 99.90th=[45876], 99.95th=[50070], 00:09:36.252 | 99.99th=[50070] 00:09:36.252 bw ( KiB/s): min=12168, max=14320, per=21.20%, avg=13244.00, stdev=1521.69, samples=2 00:09:36.252 iops : min= 3042, max= 3580, avg=3311.00, stdev=380.42, samples=2 00:09:36.252 lat (usec) : 750=0.08% 00:09:36.252 lat (msec) : 4=0.18%, 10=8.65%, 20=48.22%, 50=42.76%, 100=0.11% 00:09:36.252 cpu : usr=5.74%, sys=6.83%, ctx=423, majf=0, minf=1 00:09:36.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:36.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.252 issued rwts: total=3072,3438,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.252 job3: (groupid=0, jobs=1): err= 0: pid=851983: Thu Jul 25 14:11:05 2024 00:09:36.252 read: IOPS=2323, BW=9292KiB/s (9515kB/s)(9348KiB/1006msec) 00:09:36.252 slat (usec): min=2, max=26342, avg=246.99, stdev=1593.16 00:09:36.252 clat (usec): min=3222, max=69232, avg=29855.68, stdev=13514.52 00:09:36.252 lat (usec): min=8896, max=69237, avg=30102.67, stdev=13516.88 00:09:36.252 clat percentiles (usec): 00:09:36.252 | 1.00th=[ 9241], 5.00th=[18220], 10.00th=[20055], 20.00th=[21890], 00:09:36.252 | 30.00th=[22414], 40.00th=[22938], 50.00th=[24249], 60.00th=[25560], 00:09:36.252 | 70.00th=[28181], 80.00th=[35914], 90.00th=[55313], 95.00th=[62129], 00:09:36.252 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:09:36.252 | 99.99th=[69731] 00:09:36.252 write: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec); 0 zone resets 00:09:36.252 slat (usec): min=3, max=14047, avg=158.34, stdev=961.85 00:09:36.252 clat (usec): min=9916, max=59661, avg=22282.70, stdev=11318.88 00:09:36.252 lat (usec): min=10034, max=59671, avg=22441.04, stdev=11337.82 00:09:36.252 clat percentiles (usec): 00:09:36.252 | 1.00th=[13042], 
5.00th=[13173], 10.00th=[13435], 20.00th=[14484], 00:09:36.252 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17695], 60.00th=[18744], 00:09:36.252 | 70.00th=[19792], 80.00th=[25822], 90.00th=[43254], 95.00th=[47973], 00:09:36.252 | 99.00th=[59507], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:09:36.252 | 99.99th=[59507] 00:09:36.252 bw ( KiB/s): min= 8440, max=12040, per=16.39%, avg=10240.00, stdev=2545.58, samples=2 00:09:36.252 iops : min= 2110, max= 3010, avg=2560.00, stdev=636.40, samples=2 00:09:36.252 lat (msec) : 4=0.02%, 10=0.71%, 20=42.13%, 50=48.48%, 100=8.66% 00:09:36.252 cpu : usr=2.39%, sys=3.28%, ctx=156, majf=0, minf=1 00:09:36.252 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:36.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.252 issued rwts: total=2337,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.252 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.252 00:09:36.252 Run status group 0 (all jobs): 00:09:36.252 READ: bw=56.5MiB/s (59.3MB/s), 9292KiB/s-17.9MiB/s (9515kB/s-18.8MB/s), io=57.1MiB (59.9MB), run=1003-1011msec 00:09:36.252 WRITE: bw=61.0MiB/s (64.0MB/s), 9.94MiB/s-19.8MiB/s (10.4MB/s-20.8MB/s), io=61.7MiB (64.7MB), run=1003-1011msec 00:09:36.252 00:09:36.252 Disk stats (read/write): 00:09:36.252 nvme0n1: ios=3634/3838, merge=0/0, ticks=24400/36072, in_queue=60472, util=90.68% 00:09:36.252 nvme0n2: ios=3927/4096, merge=0/0, ticks=36366/53303, in_queue=89669, util=98.07% 00:09:36.252 nvme0n3: ios=2618/2935, merge=0/0, ticks=38396/62420, in_queue=100816, util=98.12% 00:09:36.252 nvme0n4: ios=2106/2176, merge=0/0, ticks=16990/10525, in_queue=27515, util=98.21% 00:09:36.252 14:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:36.252 [global] 00:09:36.252 thread=1 00:09:36.252 invalidate=1 00:09:36.252 rw=randwrite 00:09:36.252 time_based=1 00:09:36.252 runtime=1 00:09:36.252 ioengine=libaio 00:09:36.252 direct=1 00:09:36.252 bs=4096 00:09:36.252 iodepth=128 00:09:36.252 norandommap=0 00:09:36.252 numjobs=1 00:09:36.252 00:09:36.252 verify_dump=1 00:09:36.252 verify_backlog=512 00:09:36.252 verify_state_save=0 00:09:36.252 do_verify=1 00:09:36.252 verify=crc32c-intel 00:09:36.252 [job0] 00:09:36.252 filename=/dev/nvme0n1 00:09:36.252 [job1] 00:09:36.252 filename=/dev/nvme0n2 00:09:36.252 [job2] 00:09:36.252 filename=/dev/nvme0n3 00:09:36.252 [job3] 00:09:36.252 filename=/dev/nvme0n4 00:09:36.252 Could not set queue depth (nvme0n1) 00:09:36.252 Could not set queue depth (nvme0n2) 00:09:36.252 Could not set queue depth (nvme0n3) 00:09:36.252 Could not set queue depth (nvme0n4) 00:09:36.253 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.253 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.253 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.253 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.253 fio-3.35 00:09:36.253 Starting 4 threads 00:09:37.629 00:09:37.629 job0: (groupid=0, jobs=1): err= 0: pid=852214: Thu Jul 25 14:11:07 2024 00:09:37.629 read: IOPS=2537, 
BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec) 00:09:37.629 slat (usec): min=3, max=13953, avg=110.39, stdev=678.26 00:09:37.629 clat (usec): min=8502, max=69226, avg=14782.49, stdev=7442.69 00:09:37.629 lat (usec): min=8507, max=71819, avg=14892.89, stdev=7504.42 00:09:37.629 clat percentiles (usec): 00:09:37.629 | 1.00th=[ 8848], 5.00th=[10683], 10.00th=[11469], 20.00th=[11863], 00:09:37.629 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12649], 60.00th=[12780], 00:09:37.629 | 70.00th=[13042], 80.00th=[14746], 90.00th=[20579], 95.00th=[26870], 00:09:37.629 | 99.00th=[51643], 99.50th=[52167], 99.90th=[69731], 99.95th=[69731], 00:09:37.629 | 99.99th=[69731] 00:09:37.629 write: IOPS=2980, BW=11.6MiB/s (12.2MB/s)(11.7MiB/1009msec); 0 zone resets 00:09:37.629 slat (usec): min=5, max=24105, avg=228.28, stdev=1159.59 00:09:37.629 clat (msec): min=5, max=105, avg=29.92, stdev=17.91 00:09:37.629 lat (msec): min=5, max=105, avg=30.15, stdev=18.00 00:09:37.629 clat percentiles (msec): 00:09:37.629 | 1.00th=[ 11], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 20], 00:09:37.629 | 30.00th=[ 20], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 26], 00:09:37.629 | 70.00th=[ 31], 80.00th=[ 41], 90.00th=[ 55], 95.00th=[ 56], 00:09:37.629 | 99.00th=[ 103], 99.50th=[ 104], 99.90th=[ 106], 99.95th=[ 106], 00:09:37.629 | 99.99th=[ 106] 00:09:37.629 bw ( KiB/s): min=10752, max=12288, per=20.84%, avg=11520.00, stdev=1086.12, samples=2 00:09:37.629 iops : min= 2688, max= 3072, avg=2880.00, stdev=271.53, samples=2 00:09:37.629 lat (msec) : 10=1.42%, 20=63.12%, 50=28.06%, 100=6.65%, 250=0.75% 00:09:37.629 cpu : usr=3.87%, sys=6.05%, ctx=389, majf=0, minf=7 00:09:37.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:37.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.629 issued rwts: total=2560,3007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.629 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.629 job1: (groupid=0, jobs=1): err= 0: pid=852215: Thu Jul 25 14:11:07 2024 00:09:37.629 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:09:37.629 slat (usec): min=3, max=13517, avg=118.40, stdev=782.99 00:09:37.629 clat (usec): min=8089, max=46934, avg=14944.66, stdev=6321.37 00:09:37.629 lat (usec): min=8098, max=46972, avg=15063.06, stdev=6381.10 00:09:37.629 clat percentiles (usec): 00:09:37.629 | 1.00th=[10028], 5.00th=[10552], 10.00th=[11338], 20.00th=[11469], 00:09:37.629 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[13042], 00:09:37.629 | 70.00th=[13435], 80.00th=[15270], 90.00th=[22676], 95.00th=[33424], 00:09:37.629 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38536], 99.95th=[45876], 00:09:37.629 | 99.99th=[46924] 00:09:37.629 write: IOPS=3352, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1009msec); 0 zone resets 00:09:37.629 slat (usec): min=3, max=14562, avg=178.83, stdev=1045.02 00:09:37.629 clat (msec): min=6, max=108, avg=24.14, stdev=17.18 00:09:37.629 lat (msec): min=6, max=108, avg=24.31, stdev=17.29 00:09:37.629 clat percentiles (msec): 00:09:37.629 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:09:37.629 | 30.00th=[ 17], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 23], 00:09:37.629 | 70.00th=[ 25], 80.00th=[ 27], 90.00th=[ 40], 95.00th=[ 57], 00:09:37.629 | 99.00th=[ 102], 99.50th=[ 106], 99.90th=[ 109], 99.95th=[ 109], 00:09:37.629 | 99.99th=[ 109] 00:09:37.629 bw ( KiB/s): min= 9208, max=16840, per=23.56%, avg=13024.00, 
stdev=5396.64, samples=2 00:09:37.629 iops : min= 2302, max= 4210, avg=3256.00, stdev=1349.16, samples=2 00:09:37.629 lat (msec) : 10=1.81%, 20=61.75%, 50=32.75%, 100=3.10%, 250=0.59% 00:09:37.629 cpu : usr=3.87%, sys=6.05%, ctx=292, majf=0, minf=19 00:09:37.629 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:37.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.630 issued rwts: total=3072,3383,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.630 job2: (groupid=0, jobs=1): err= 0: pid=852216: Thu Jul 25 14:11:07 2024 00:09:37.630 read: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec) 00:09:37.630 slat (usec): min=3, max=13466, avg=120.75, stdev=761.72 00:09:37.630 clat (usec): min=4786, max=46188, avg=13990.60, stdev=5982.89 00:09:37.630 lat (usec): min=4798, max=46196, avg=14111.35, stdev=6044.86 00:09:37.630 clat percentiles (usec): 00:09:37.630 | 1.00th=[ 6063], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[11076], 00:09:37.630 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11469], 60.00th=[12125], 00:09:37.630 | 70.00th=[14353], 80.00th=[16319], 90.00th=[21103], 95.00th=[27132], 00:09:37.630 | 99.00th=[40109], 99.50th=[44827], 99.90th=[46400], 99.95th=[46400], 00:09:37.630 | 99.99th=[46400] 00:09:37.630 write: IOPS=3994, BW=15.6MiB/s (16.4MB/s)(15.8MiB/1015msec); 0 zone resets 00:09:37.630 slat (usec): min=5, max=12605, avg=129.89, stdev=605.10 00:09:37.630 clat (usec): min=2652, max=46139, avg=19373.78, stdev=8398.27 00:09:37.630 lat (usec): min=2660, max=46150, avg=19503.67, stdev=8459.12 00:09:37.630 clat percentiles (usec): 00:09:37.630 | 1.00th=[ 4293], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[10945], 00:09:37.630 | 30.00th=[13829], 40.00th=[16581], 50.00th=[19268], 60.00th=[19792], 00:09:37.630 | 70.00th=[20579], 80.00th=[27395], 90.00th=[32900], 95.00th=[35914], 00:09:37.630 | 99.00th=[37487], 99.50th=[37487], 99.90th=[40633], 99.95th=[45351], 00:09:37.630 | 99.99th=[46400] 00:09:37.630 bw ( KiB/s): min=15600, max=15816, per=28.41%, avg=15708.00, stdev=152.74, samples=2 00:09:37.630 iops : min= 3900, max= 3954, avg=3927.00, stdev=38.18, samples=2 00:09:37.630 lat (msec) : 4=0.34%, 10=9.66%, 20=66.23%, 50=23.76% 00:09:37.630 cpu : usr=5.33%, sys=7.50%, ctx=445, majf=0, minf=13 00:09:37.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:37.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.630 issued rwts: total=3584,4054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.630 job3: (groupid=0, jobs=1): err= 0: pid=852217: Thu Jul 25 14:11:07 2024 00:09:37.630 read: IOPS=3087, BW=12.1MiB/s (12.6MB/s)(12.2MiB/1015msec) 00:09:37.630 slat (usec): min=3, max=13689, avg=137.39, stdev=834.02 00:09:37.630 clat (usec): min=5286, max=55695, avg=15774.18, stdev=7107.65 00:09:37.630 lat (usec): min=5295, max=55713, avg=15911.57, stdev=7183.47 00:09:37.630 clat percentiles (usec): 00:09:37.630 | 1.00th=[ 7570], 5.00th=[10421], 10.00th=[10945], 20.00th=[11469], 00:09:37.630 | 30.00th=[11731], 40.00th=[13042], 50.00th=[14222], 60.00th=[14746], 00:09:37.630 | 70.00th=[15795], 80.00th=[18220], 90.00th=[22152], 95.00th=[31065], 00:09:37.630 | 99.00th=[47973], 
99.50th=[50070], 99.90th=[55837], 99.95th=[55837], 00:09:37.630 | 99.99th=[55837] 00:09:37.630 write: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec); 0 zone resets 00:09:37.630 slat (usec): min=4, max=9126, avg=149.16, stdev=714.80 00:09:37.630 clat (usec): min=2855, max=55718, avg=21906.88, stdev=11371.92 00:09:37.630 lat (usec): min=2863, max=55741, avg=22056.04, stdev=11453.63 00:09:37.630 clat percentiles (usec): 00:09:37.630 | 1.00th=[ 5145], 5.00th=[ 8848], 10.00th=[10683], 20.00th=[12911], 00:09:37.630 | 30.00th=[15008], 40.00th=[16319], 50.00th=[18744], 60.00th=[20055], 00:09:37.630 | 70.00th=[25035], 80.00th=[33817], 90.00th=[41157], 95.00th=[44827], 00:09:37.630 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53216], 99.95th=[55837], 00:09:37.630 | 99.99th=[55837] 00:09:37.630 bw ( KiB/s): min=11528, max=16624, per=25.46%, avg=14076.00, stdev=3603.42, samples=2 00:09:37.630 iops : min= 2882, max= 4156, avg=3519.00, stdev=900.85, samples=2 00:09:37.630 lat (msec) : 4=0.39%, 10=4.97%, 20=65.64%, 50=28.24%, 100=0.76% 00:09:37.630 cpu : usr=5.62%, sys=5.52%, ctx=349, majf=0, minf=13 00:09:37.630 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:37.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:37.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:37.630 issued rwts: total=3134,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:37.630 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:37.630 00:09:37.630 Run status group 0 (all jobs): 00:09:37.630 READ: bw=47.5MiB/s (49.8MB/s), 9.91MiB/s-13.8MiB/s (10.4MB/s-14.5MB/s), io=48.2MiB (50.6MB), run=1009-1015msec 00:09:37.630 WRITE: bw=54.0MiB/s (56.6MB/s), 11.6MiB/s-15.6MiB/s (12.2MB/s-16.4MB/s), io=54.8MiB (57.5MB), run=1009-1015msec 00:09:37.630 00:09:37.630 Disk stats (read/write): 00:09:37.630 nvme0n1: ios=2098/2543, merge=0/0, ticks=13131/38795, in_queue=51926, util=86.97% 00:09:37.630 nvme0n2: ios=2587/2741, merge=0/0, ticks=19083/30207, in_queue=49290, util=97.36% 00:09:37.630 nvme0n3: ios=3095/3415, merge=0/0, ticks=41083/63219, in_queue=104302, util=97.18% 00:09:37.630 nvme0n4: ios=2613/3072, merge=0/0, ticks=30937/55490, in_queue=86427, util=97.16% 00:09:37.630 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:37.630 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=852353 00:09:37.630 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:37.630 14:11:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:37.630 [global] 00:09:37.630 thread=1 00:09:37.630 invalidate=1 00:09:37.630 rw=read 00:09:37.630 time_based=1 00:09:37.630 runtime=10 00:09:37.630 ioengine=libaio 00:09:37.630 direct=1 00:09:37.630 bs=4096 00:09:37.630 iodepth=1 00:09:37.630 norandommap=1 00:09:37.630 numjobs=1 00:09:37.630 00:09:37.630 [job0] 00:09:37.630 filename=/dev/nvme0n1 00:09:37.630 [job1] 00:09:37.630 filename=/dev/nvme0n2 00:09:37.630 [job2] 00:09:37.630 filename=/dev/nvme0n3 00:09:37.630 [job3] 00:09:37.630 filename=/dev/nvme0n4 00:09:37.630 Could not set queue depth (nvme0n1) 00:09:37.630 Could not set queue depth (nvme0n2) 00:09:37.630 Could not set queue depth (nvme0n3) 00:09:37.630 Could not set queue depth (nvme0n4) 00:09:37.630 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:37.630 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.630 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.630 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:37.630 fio-3.35 00:09:37.630 Starting 4 threads 00:09:40.913 14:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:40.913 14:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:40.913 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=36618240, buflen=4096 00:09:40.913 fio: pid=852450, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:41.171 14:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.171 14:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:41.171 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=8437760, buflen=4096 00:09:41.171 fio: pid=852449, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:41.429 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=45408256, buflen=4096 00:09:41.429 fio: pid=852447, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:41.429 14:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.429 14:11:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:41.688 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=54587392, buflen=4096 00:09:41.688 fio: pid=852448, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:41.688 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.688 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:41.688 00:09:41.688 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=852447: Thu Jul 25 14:11:11 2024 00:09:41.688 read: IOPS=3208, BW=12.5MiB/s (13.1MB/s)(43.3MiB/3455msec) 00:09:41.688 slat (usec): min=5, max=15737, avg=13.97, stdev=192.43 00:09:41.688 clat (usec): min=187, max=41273, avg=292.69, stdev=393.77 00:09:41.688 lat (usec): min=193, max=41278, avg=306.67, stdev=438.51 00:09:41.688 clat percentiles (usec): 00:09:41.688 | 1.00th=[ 204], 5.00th=[ 221], 10.00th=[ 233], 20.00th=[ 253], 00:09:41.688 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 297], 00:09:41.688 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 330], 95.00th=[ 351], 00:09:41.688 | 99.00th=[ 506], 99.50th=[ 537], 99.90th=[ 889], 99.95th=[ 1270], 00:09:41.688 | 99.99th=[ 2089] 00:09:41.688 bw ( KiB/s): min=11552, max=13448, per=33.78%, avg=12832.00, stdev=662.52, samples=6 00:09:41.688 iops : 
min= 2888, max= 3362, avg=3208.00, stdev=165.63, samples=6 00:09:41.688 lat (usec) : 250=18.66%, 500=80.14%, 750=1.07%, 1000=0.05% 00:09:41.688 lat (msec) : 2=0.05%, 4=0.01%, 50=0.01% 00:09:41.688 cpu : usr=2.32%, sys=5.94%, ctx=11090, majf=0, minf=1 00:09:41.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.688 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.688 issued rwts: total=11087,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.688 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=852448: Thu Jul 25 14:11:11 2024 00:09:41.688 read: IOPS=3574, BW=14.0MiB/s (14.6MB/s)(52.1MiB/3729msec) 00:09:41.688 slat (usec): min=4, max=15097, avg=16.81, stdev=249.30 00:09:41.688 clat (usec): min=158, max=40974, avg=258.36, stdev=700.92 00:09:41.688 lat (usec): min=163, max=40982, avg=275.18, stdev=744.39 00:09:41.688 clat percentiles (usec): 00:09:41.688 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 215], 00:09:41.688 | 30.00th=[ 225], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:09:41.688 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 314], 00:09:41.688 | 99.00th=[ 453], 99.50th=[ 498], 99.90th=[ 693], 99.95th=[ 1713], 00:09:41.688 | 99.99th=[40633] 00:09:41.688 bw ( KiB/s): min=12600, max=15856, per=37.68%, avg=14315.14, stdev=1301.60, samples=7 00:09:41.688 iops : min= 3150, max= 3964, avg=3578.71, stdev=325.49, samples=7 00:09:41.688 lat (usec) : 250=64.56%, 500=34.94%, 750=0.41%, 1000=0.01% 00:09:41.688 lat (msec) : 2=0.04%, 4=0.01%, 50=0.03% 00:09:41.688 cpu : usr=2.47%, sys=6.12%, ctx=13336, majf=0, minf=1 00:09:41.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.688 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.688 issued rwts: total=13328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.688 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=852449: Thu Jul 25 14:11:11 2024 00:09:41.688 read: IOPS=637, BW=2549KiB/s (2610kB/s)(8240KiB/3233msec) 00:09:41.688 slat (nsec): min=5015, max=80853, avg=13929.68, stdev=6230.89 00:09:41.688 clat (usec): min=210, max=53066, avg=1539.49, stdev=7014.20 00:09:41.688 lat (usec): min=219, max=53079, avg=1553.42, stdev=7015.97 00:09:41.688 clat percentiles (usec): 00:09:41.688 | 1.00th=[ 231], 5.00th=[ 253], 10.00th=[ 273], 20.00th=[ 289], 00:09:41.688 | 30.00th=[ 297], 40.00th=[ 306], 50.00th=[ 314], 60.00th=[ 318], 00:09:41.688 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 457], 00:09:41.688 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:41.688 | 99.99th=[53216] 00:09:41.688 bw ( KiB/s): min= 96, max= 8336, per=7.21%, avg=2740.00, stdev=4100.70, samples=6 00:09:41.688 iops : min= 24, max= 2084, avg=685.00, stdev=1025.17, samples=6 00:09:41.688 lat (usec) : 250=4.22%, 500=92.04%, 750=0.68% 00:09:41.688 lat (msec) : 10=0.05%, 50=2.91%, 100=0.05% 00:09:41.688 cpu : usr=0.28%, sys=1.39%, ctx=2061, majf=0, minf=1 00:09:41.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.688 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.688 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.688 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=852450: Thu Jul 25 14:11:11 2024 00:09:41.688 read: IOPS=3060, BW=12.0MiB/s (12.5MB/s)(34.9MiB/2921msec) 00:09:41.688 slat (nsec): min=5780, max=53535, avg=12165.83, stdev=5244.76 00:09:41.688 clat (usec): min=190, max=41178, avg=308.96, stdev=1186.25 00:09:41.688 lat (usec): min=198, max=41192, avg=321.12, stdev=1186.65 00:09:41.688 clat percentiles (usec): 00:09:41.688 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 235], 00:09:41.688 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 281], 00:09:41.688 | 70.00th=[ 293], 80.00th=[ 310], 90.00th=[ 322], 95.00th=[ 330], 00:09:41.688 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[11076], 99.95th=[41157], 00:09:41.688 | 99.99th=[41157] 00:09:41.688 bw ( KiB/s): min= 4752, max=14328, per=30.97%, avg=11764.80, stdev=3973.52, samples=5 00:09:41.688 iops : min= 1188, max= 3582, avg=2941.20, stdev=993.38, samples=5 00:09:41.688 lat (usec) : 250=30.99%, 500=68.76%, 750=0.07%, 1000=0.03% 00:09:41.688 lat (msec) : 2=0.02%, 4=0.01%, 20=0.01%, 50=0.09% 00:09:41.688 cpu : usr=2.33%, sys=5.86%, ctx=8944, majf=0, minf=1 00:09:41.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:41.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.688 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:41.688 issued rwts: total=8941,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:41.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:41.688 00:09:41.688 Run status group 0 (all jobs): 00:09:41.688 READ: bw=37.1MiB/s (38.9MB/s), 2549KiB/s-14.0MiB/s (2610kB/s-14.6MB/s), io=138MiB (145MB), run=2921-3729msec 00:09:41.688 00:09:41.688 Disk stats (read/write): 00:09:41.688 nvme0n1: ios=10873/0, merge=0/0, ticks=3287/0, in_queue=3287, util=99.08% 00:09:41.688 nvme0n2: ios=12891/0, merge=0/0, ticks=4085/0, in_queue=4085, util=98.07% 00:09:41.688 nvme0n3: ios=2057/0, merge=0/0, ticks=3026/0, in_queue=3026, util=96.79% 00:09:41.689 nvme0n4: ios=8825/0, merge=0/0, ticks=2829/0, in_queue=2829, util=99.80% 00:09:41.947 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.947 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:42.204 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.204 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:42.462 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.462 14:11:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:42.719 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.719 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 852353 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:42.976 nvmf hotplug test: fio failed as expected 00:09:42.976 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.234 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:43.234 rmmod nvme_tcp 00:09:43.234 rmmod nvme_fabrics 00:09:43.234 rmmod nvme_keyring 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 850345 ']' 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 850345 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 850345 ']' 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 850345 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 850345 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 850345' 00:09:43.492 killing process with pid 850345 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 850345 00:09:43.492 14:11:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 850345 00:09:43.751 14:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:43.751 14:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:43.751 14:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:43.751 14:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:43.751 14:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:43.751 14:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.751 14:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.751 14:11:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:45.693 00:09:45.693 real 0m23.714s 00:09:45.693 user 1m22.255s 00:09:45.693 sys 0m7.466s 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.693 ************************************ 00:09:45.693 END TEST nvmf_fio_target 00:09:45.693 ************************************ 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh 
--transport=tcp 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:45.693 ************************************ 00:09:45.693 START TEST nvmf_bdevio 00:09:45.693 ************************************ 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:45.693 * Looking for test storage... 00:09:45.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:45.693 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.954 14:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.860 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:47.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:47.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:47.861 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:47.861 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # 
(( 2 > 1 )) 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:47.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:09:47.861 00:09:47.861 --- 10.0.0.2 ping statistics --- 00:09:47.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.861 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:47.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:09:47.861 00:09:47.861 --- 10.0.0.1 ping statistics --- 00:09:47.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.861 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=855076 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 855076 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 855076 ']' 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:47.861 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.120 [2024-07-25 14:11:17.559371] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:09:48.120 [2024-07-25 14:11:17.559454] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.120 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.120 [2024-07-25 14:11:17.622444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.120 [2024-07-25 14:11:17.730646] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.120 [2024-07-25 14:11:17.730730] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.120 [2024-07-25 14:11:17.730745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.120 [2024-07-25 14:11:17.730757] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.120 [2024-07-25 14:11:17.730768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.120 [2024-07-25 14:11:17.730870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:48.120 [2024-07-25 14:11:17.730924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:48.120 [2024-07-25 14:11:17.730951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:48.120 [2024-07-25 14:11:17.730954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.380 [2024-07-25 14:11:17.889573] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.380 Malloc0 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.380 [2024-07-25 14:11:17.943144] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:48.380 { 00:09:48.380 "params": { 00:09:48.380 "name": "Nvme$subsystem", 00:09:48.380 "trtype": "$TEST_TRANSPORT", 00:09:48.380 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:48.380 "adrfam": "ipv4", 00:09:48.380 "trsvcid": "$NVMF_PORT", 00:09:48.380 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:48.380 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:48.380 "hdgst": ${hdgst:-false}, 00:09:48.380 "ddgst": ${ddgst:-false} 00:09:48.380 }, 00:09:48.380 "method": "bdev_nvme_attach_controller" 00:09:48.380 } 00:09:48.380 EOF 00:09:48.380 )") 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:48.380 14:11:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:48.380 "params": { 00:09:48.380 "name": "Nvme1", 00:09:48.380 "trtype": "tcp", 00:09:48.380 "traddr": "10.0.0.2", 00:09:48.380 "adrfam": "ipv4", 00:09:48.380 "trsvcid": "4420", 00:09:48.380 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:48.380 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:48.380 "hdgst": false, 00:09:48.380 "ddgst": false 00:09:48.380 }, 00:09:48.380 "method": "bdev_nvme_attach_controller" 00:09:48.380 }' 00:09:48.380 [2024-07-25 14:11:17.990987] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:09:48.380 [2024-07-25 14:11:17.991079] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855214 ] 00:09:48.380 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.640 [2024-07-25 14:11:18.051305] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:48.640 [2024-07-25 14:11:18.166495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.640 [2024-07-25 14:11:18.166550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:48.640 [2024-07-25 14:11:18.166554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.899 I/O targets: 00:09:48.899 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:48.899 00:09:48.899 00:09:48.899 CUnit - A unit testing framework for C - Version 2.1-3 00:09:48.899 http://cunit.sourceforge.net/ 00:09:48.899 00:09:48.899 00:09:48.899 Suite: bdevio tests on: Nvme1n1 00:09:49.158 Test: blockdev write read block ...passed 00:09:49.158 Test: blockdev write zeroes read block ...passed 00:09:49.158 Test: blockdev write zeroes read no split ...passed 00:09:49.158 Test: blockdev write zeroes read split ...passed 00:09:49.158 Test: blockdev write zeroes read split partial ...passed 00:09:49.158 Test: blockdev reset ...[2024-07-25 14:11:18.699900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:49.158 [2024-07-25 14:11:18.700013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a93580 (9): Bad file descriptor 00:09:49.158 [2024-07-25 14:11:18.715378] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:49.158 passed 00:09:49.158 Test: blockdev write read 8 blocks ...passed 00:09:49.158 Test: blockdev write read size > 128k ...passed 00:09:49.158 Test: blockdev write read invalid size ...passed 00:09:49.158 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:49.158 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:49.158 Test: blockdev write read max offset ...passed 00:09:49.417 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:49.417 Test: blockdev writev readv 8 blocks ...passed 00:09:49.417 Test: blockdev writev readv 30 x 1block ...passed 00:09:49.417 Test: blockdev writev readv block ...passed 00:09:49.417 Test: blockdev writev readv size > 128k ...passed 00:09:49.417 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:49.417 Test: blockdev comparev and writev ...[2024-07-25 14:11:18.885427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:49.417 [2024-07-25 14:11:18.885463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:49.417 [2024-07-25 14:11:18.885488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:49.417 [2024-07-25 14:11:18.885505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:49.417 [2024-07-25 14:11:18.885853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:49.417 [2024-07-25 14:11:18.885877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:49.417 [2024-07-25 14:11:18.885899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:49.417 [2024-07-25 14:11:18.885915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:49.417 [2024-07-25 14:11:18.886248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:49.417 [2024-07-25 14:11:18.886273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:49.417 [2024-07-25 14:11:18.886295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:49.417 [2024-07-25 14:11:18.886319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:49.417 [2024-07-25 14:11:18.886659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:49.417 [2024-07-25 14:11:18.886683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:49.417 [2024-07-25 14:11:18.886705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:49.418 [2024-07-25 14:11:18.886721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:49.418 passed 00:09:49.418 Test: blockdev nvme passthru rw ...passed 00:09:49.418 Test: blockdev nvme passthru vendor specific ...[2024-07-25 14:11:18.970325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:49.418 [2024-07-25 14:11:18.970352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:49.418 [2024-07-25 14:11:18.970489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:49.418 [2024-07-25 14:11:18.970511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:49.418 [2024-07-25 14:11:18.970645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:49.418 [2024-07-25 14:11:18.970668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:49.418 [2024-07-25 14:11:18.970806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:49.418 [2024-07-25 14:11:18.970829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:49.418 passed 00:09:49.418 Test: blockdev nvme admin passthru ...passed 00:09:49.418 Test: blockdev copy ...passed 00:09:49.418 00:09:49.418 Run Summary: Type Total Ran Passed Failed Inactive 00:09:49.418 suites 1 1 n/a 0 0 00:09:49.418 tests 23 23 23 0 0 00:09:49.418 asserts 152 152 152 0 n/a 00:09:49.418 00:09:49.418 Elapsed time = 1.054 seconds 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.676 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.676 rmmod nvme_tcp 00:09:49.676 rmmod nvme_fabrics 00:09:49.676 rmmod nvme_keyring 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
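For reference, the bdev_nvme_attach_controller configuration that bdevio printed before the suite ran (name Nvme1, TCP, 10.0.0.2:4420, subnqn nqn.2016-06.io.spdk:cnode1, hostnqn nqn.2016-06.io.spdk:host1, digests disabled) can also be issued by hand against a running target. A minimal sketch using SPDK's scripts/rpc.py follows; the short option names are assumed from rpc.py's help text and may differ between SPDK versions, so treat it as illustrative rather than as the exact call the test makes:

    # Hypothetical manual equivalent of the JSON parameters shown above (not part of the test log).
    # -b bdev name, -t transport, -a target address, -f address family, -s service id,
    # -n subsystem NQN, -q host NQN; hdgst/ddgst stay at their default of false.
    scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The Nvme1n1 bdev created this way is what the 23 bdevio tests above exercise (152 asserts, roughly one second of elapsed time per the run summary).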
00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 855076 ']' 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 855076 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 855076 ']' 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 855076 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855076 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855076' 00:09:49.934 killing process with pid 855076 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 855076 00:09:49.934 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 855076 00:09:50.193 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:50.193 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:50.193 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:50.193 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:50.193 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:50.193 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:50.193 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:50.193 14:11:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.100 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:52.100 00:09:52.100 real 0m6.424s 00:09:52.100 user 0m10.834s 00:09:52.100 sys 0m2.038s 00:09:52.100 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.100 14:11:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:52.100 ************************************ 00:09:52.100 END TEST nvmf_bdevio 00:09:52.100 ************************************ 00:09:52.100 14:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:52.100 14:11:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:52.100 00:09:52.100 real 3m53.210s 00:09:52.100 user 10m1.652s 00:09:52.100 sys 1m8.580s 00:09:52.100 14:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:52.100 14:11:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.100 ************************************ 00:09:52.100 END TEST nvmf_target_core 00:09:52.100 
************************************ 00:09:52.360 14:11:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:52.360 14:11:21 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:52.360 14:11:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.360 14:11:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.360 14:11:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:52.360 ************************************ 00:09:52.360 START TEST nvmf_target_extra 00:09:52.360 ************************************ 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:52.360 * Looking for test storage... 00:09:52.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.360 14:11:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
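The common.sh sourced here generates a fresh host identity with nvme gen-hostnqn and keeps it in NVME_HOSTNQN/NVME_HOSTID for the NVME_CONNECT helper. A rough sketch of how a kernel initiator would use that identity against the listener these tests create; addresses and NQNs are taken from this log, while the nvme-cli flag spellings are assumptions and not something this particular test stage executes:

    # Assumed nvme-cli usage mirroring NVME_HOSTNQN/NVME_CONNECT from the sourced common.sh.
    HOSTNQN=$(nvme gen-hostnqn)
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$HOSTNQN"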
00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:52.361 ************************************ 00:09:52.361 START TEST nvmf_example 00:09:52.361 ************************************ 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:52.361 * Looking for test storage... 00:09:52.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.361 14:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:52.361 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:52.362 14:11:21 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:54.895 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:54.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:54.896 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:54.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:54.896 14:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:54.896 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:54.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:54.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:09:54.896 00:09:54.896 --- 10.0.0.2 ping statistics --- 00:09:54.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.896 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:54.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:54.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:09:54.896 00:09:54.896 --- 10.0.0.1 ping statistics --- 00:09:54.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:54.896 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:54.896 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=857339 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 857339 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 857339 ']' 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:54.897 14:11:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:55.836 14:11:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:55.836 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.820 Initializing NVMe Controllers 00:10:05.820 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:05.820 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:05.820 Initialization complete. Launching workers. 00:10:05.820 ======================================================== 00:10:05.820 Latency(us) 00:10:05.820 Device Information : IOPS MiB/s Average min max 00:10:05.820 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15595.72 60.92 4103.27 694.13 15662.09 00:10:05.820 ======================================================== 00:10:05.820 Total : 15595.72 60.92 4103.27 694.13 15662.09 00:10:05.820 00:10:05.820 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:05.820 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:05.820 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.820 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:05.820 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.820 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:05.820 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.820 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.820 rmmod nvme_tcp 00:10:06.079 rmmod nvme_fabrics 00:10:06.079 rmmod nvme_keyring 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 857339 ']' 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 857339 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 857339 ']' 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 857339 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 
-- # ps --no-headers -o comm= 857339 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 857339' 00:10:06.079 killing process with pid 857339 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 857339 00:10:06.079 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 857339 00:10:06.338 nvmf threads initialize successfully 00:10:06.338 bdev subsystem init successfully 00:10:06.338 created a nvmf target service 00:10:06.338 create targets's poll groups done 00:10:06.338 all subsystems of target started 00:10:06.338 nvmf target is running 00:10:06.338 all subsystems of target stopped 00:10:06.338 destroy targets's poll groups done 00:10:06.338 destroyed the nvmf target service 00:10:06.338 bdev subsystem finish successfully 00:10:06.338 nvmf threads destroy successfully 00:10:06.338 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:06.338 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:06.338 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:06.338 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:06.338 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:06.338 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.338 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.338 14:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.246 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:08.246 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:08.246 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.246 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.246 00:10:08.246 real 0m15.922s 00:10:08.246 user 0m44.947s 00:10:08.246 sys 0m3.329s 00:10:08.246 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:08.246 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.246 ************************************ 00:10:08.246 END TEST nvmf_example 00:10:08.246 ************************************ 00:10:08.246 14:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:10:08.247 14:11:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:08.247 14:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:08.247 14:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 
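The nvmf_example run that just finished reduces to a short target bring-up followed by one perf pass. Every command below is lifted from the rpc_cmd and spdk_nvme_perf lines earlier in this log and is shown only as a condensed sketch; rpc.py stands in for the test's rpc_cmd wrapper, and paths are relative to the SPDK checkout:

    # Target side: TCP transport, one 64 MiB / 512 B-block malloc namespace, listener on 10.0.0.2:4420.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: 10 s of 4 KiB random I/O at queue depth 64 with a 30% read mix
    # (the run above reported ~15.6k IOPS at ~4.1 ms average latency).
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'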
00:10:08.247 14:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:08.247 ************************************ 00:10:08.247 START TEST nvmf_filesystem 00:10:08.247 ************************************ 00:10:08.247 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:08.508 * Looking for test storage... 00:10:08.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:08.508 14:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:08.508 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:08.509 14:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:08.509 14:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:08.509 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:08.509 14:11:37 
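The CONFIG_* listing above is common/build_config.sh being sourced: the options chosen at SPDK configure time are recorded as plain shell variables that later test scripts can branch on. A minimal, self-contained sketch of that flag-file idiom follows; the stand-in file and the specific checks are illustrative assumptions, only the variable shapes are taken from the log.

  #!/usr/bin/env bash
  # Hypothetical stand-in for a generated build_config.sh-style flag file.
  config_file=$(mktemp)
  cat > "$config_file" <<'EOF'
  CONFIG_DEBUG=y
  CONFIG_FUZZER=n
  CONFIG_MAX_LCORES=128
  EOF
  # shellcheck source=/dev/null
  source "$config_file"                       # flags become shell variables
  [[ $CONFIG_DEBUG == y ]] && echo "debug build"
  [[ $CONFIG_FUZZER != y ]] && echo "fuzzer targets disabled"
  ((CONFIG_MAX_LCORES > 0)) && echo "built for up to $CONFIG_MAX_LCORES cores"
  rm -f "$config_file"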
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:08.509 #define SPDK_CONFIG_H 00:10:08.509 #define SPDK_CONFIG_APPS 1 00:10:08.509 #define SPDK_CONFIG_ARCH native 00:10:08.509 #undef SPDK_CONFIG_ASAN 00:10:08.509 #undef SPDK_CONFIG_AVAHI 00:10:08.509 #undef SPDK_CONFIG_CET 00:10:08.509 #define SPDK_CONFIG_COVERAGE 1 00:10:08.509 #define SPDK_CONFIG_CROSS_PREFIX 00:10:08.509 #undef SPDK_CONFIG_CRYPTO 00:10:08.509 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:08.509 #undef SPDK_CONFIG_CUSTOMOCF 00:10:08.509 #undef SPDK_CONFIG_DAOS 00:10:08.509 #define SPDK_CONFIG_DAOS_DIR 00:10:08.509 #define SPDK_CONFIG_DEBUG 1 00:10:08.509 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:08.509 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:08.509 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:08.509 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:08.509 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:08.509 #undef SPDK_CONFIG_DPDK_UADK 00:10:08.509 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:08.509 #define SPDK_CONFIG_EXAMPLES 1 00:10:08.509 #undef SPDK_CONFIG_FC 00:10:08.509 #define SPDK_CONFIG_FC_PATH 00:10:08.509 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:08.510 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:08.510 #undef SPDK_CONFIG_FUSE 00:10:08.510 #undef SPDK_CONFIG_FUZZER 00:10:08.510 #define SPDK_CONFIG_FUZZER_LIB 00:10:08.510 #undef SPDK_CONFIG_GOLANG 00:10:08.510 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:08.510 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:08.510 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:08.510 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:08.510 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:08.510 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:08.510 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:08.510 #define SPDK_CONFIG_IDXD 1 00:10:08.510 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:08.510 #undef SPDK_CONFIG_IPSEC_MB 00:10:08.510 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:08.510 #define SPDK_CONFIG_ISAL 1 00:10:08.510 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:08.510 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:08.510 #define SPDK_CONFIG_LIBDIR 00:10:08.510 #undef SPDK_CONFIG_LTO 00:10:08.510 #define SPDK_CONFIG_MAX_LCORES 128 00:10:08.510 #define SPDK_CONFIG_NVME_CUSE 1 00:10:08.510 #undef SPDK_CONFIG_OCF 00:10:08.510 #define SPDK_CONFIG_OCF_PATH 00:10:08.510 #define SPDK_CONFIG_OPENSSL_PATH 00:10:08.510 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:08.510 #define SPDK_CONFIG_PGO_DIR 00:10:08.510 #undef SPDK_CONFIG_PGO_USE 00:10:08.510 #define SPDK_CONFIG_PREFIX /usr/local 00:10:08.510 #undef SPDK_CONFIG_RAID5F 00:10:08.510 #undef SPDK_CONFIG_RBD 00:10:08.510 #define SPDK_CONFIG_RDMA 1 00:10:08.510 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:08.510 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:08.510 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:08.510 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:08.510 #define SPDK_CONFIG_SHARED 1 00:10:08.510 #undef SPDK_CONFIG_SMA 00:10:08.510 #define SPDK_CONFIG_TESTS 1 00:10:08.510 #undef SPDK_CONFIG_TSAN 00:10:08.510 #define SPDK_CONFIG_UBLK 1 00:10:08.510 #define SPDK_CONFIG_UBSAN 1 00:10:08.510 #undef SPDK_CONFIG_UNIT_TESTS 00:10:08.510 #undef SPDK_CONFIG_URING 00:10:08.510 #define SPDK_CONFIG_URING_PATH 00:10:08.510 #undef SPDK_CONFIG_URING_ZNS 00:10:08.510 #undef SPDK_CONFIG_USDT 00:10:08.510 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:08.510 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:08.510 #define SPDK_CONFIG_VFIO_USER 1 00:10:08.510 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:10:08.510 #define SPDK_CONFIG_VHOST 1 00:10:08.510 #define SPDK_CONFIG_VIRTIO 1 00:10:08.510 #undef SPDK_CONFIG_VTUNE 00:10:08.510 #define SPDK_CONFIG_VTUNE_DIR 00:10:08.510 #define SPDK_CONFIG_WERROR 1 00:10:08.510 #define SPDK_CONFIG_WPDK_DIR 00:10:08.510 #undef SPDK_CONFIG_XNVME 00:10:08.510 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:08.510 14:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:08.510 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
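The pm/common entries above build the list of power/metrics collectors for this host: an associative array records which collectors run under 'sudo -E', two collectors are always enabled, and two more are appended because the node is a physical (non-QEMU, non-container) Linux machine. A sketch of that selection; the array contents mirror the trace, while the host probe is simplified here (the real script compares a platform vendor string that the log elides as dots).

  #!/usr/bin/env bash
  # Which collectors need an elevated prefix, mirrored from the trace.
  declare -A MONITOR_RESOURCES_SUDO=(
      ["collect-bmc-pm"]=1
      ["collect-cpu-load"]=0
      ["collect-cpu-temp"]=0
      ["collect-vmstat"]=0
  )
  SUDO[0]=""            # index 0: run as-is
  SUDO[1]="sudo -E"     # index 1: run with sudo, preserving the environment

  MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)   # always enabled
  if [[ "$(uname -s)" == Linux && ! -e /.dockerenv ]]; then
      # Simplified stand-in for the physical-host check in the trace.
      MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
  fi

  for res in "${MONITOR_RESOURCES[@]}"; do
      printf 'would run: %s %s\n' "${SUDO[${MONITOR_RESOURCES_SUDO[$res]}]}" "$res"
  done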
00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:08.511 14:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:08.511 14:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:08.511 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:08.512 
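The long run of ': 0' / ': 1' lines followed by 'export SPDK_TEST_*' above is consistent with bash's assign-default expansion: each flag keeps whatever the caller exported and otherwise falls back to a default, so a Jenkins job can switch individual test areas on or off. A short sketch of that idiom; the flag names and the 1/tcp defaults match this job's trace, and the final branch is only illustrative.

  #!/usr/bin/env bash
  # ${VAR:=default} assigns only when VAR is unset or empty; ':' discards the
  # expansion result, which is exactly what produces the ": 1" trace lines.
  : "${SPDK_TEST_NVMF:=1}"
  export SPDK_TEST_NVMF
  : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"        # non-boolean flags use the same idiom
  export SPDK_TEST_NVMF_TRANSPORT

  # Later scripts branch on the exported values:
  if ((SPDK_TEST_NVMF)); then
      echo "NVMe-oF tests enabled over transport: $SPDK_TEST_NVMF_TRANSPORT"
  fi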
14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:10:08.512 14:11:37 
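The sanitizer setup traced above writes a leak-suppression file and exports matching ASAN/UBSAN/LSAN options so that a known libfuse leak does not fail the run. A sketch of that setup; the option strings are copied from the log, while writing the suppression file with a single echo is an assumption about the elided 'cat' step.

  #!/usr/bin/env bash
  suppression_file=/var/tmp/asan_suppression_file
  rm -rf "$suppression_file"
  echo "leak:libfuse3.so" > "$suppression_file"   # accepted leak, ignored by LeakSanitizer

  export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
  export UBSAN_OPTIONS="halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134"
  export LSAN_OPTIONS="suppressions=$suppression_file"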
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 859035 ]] 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 859035 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:10:08.512 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.aBFRSv 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.aBFRSv/tests/target /tmp/spdk.aBFRSv 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:10:08.513 14:11:37 
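set_test_storage, entered above, first lines up candidate scratch directories: the test's own directory, a per-run path under a mktemp-generated name, and the mktemp directory itself, then asks for the requested bytes plus head-room (the traced 2214592512 is the 2147483648 passed in plus 64 MiB). A self-contained sketch of that candidate setup; the testdir value here is a stand-in for the nvmf target test directory used in the log.

  #!/usr/bin/env bash
  testdir=/tmp/demo/test/nvmf/target                       # illustrative only
  requested_size=$((2147483648 + 64 * 1024 * 1024))        # 2214592512, as traced

  # -u: generate a unique name without creating anything yet; -d/-t: directory
  # template under $TMPDIR (or /tmp), matching "mktemp -udt spdk.XXXXXX" above.
  storage_fallback=$(mktemp -udt spdk.XXXXXX)
  storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")

  mkdir -p "${storage_candidates[@]}"
  printf 'candidate: %s\n' "${storage_candidates[@]}"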
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=56352231424 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994713088 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5642481664 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30987436032 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12376535040 00:10:08.513 14:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22409216 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30997000192 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=356352 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:10:08.513 * Looking for test storage... 
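The df -T output above is read into associative arrays keyed by mount point, and the storage check that follows picks the mount backing the candidate directory and verifies the requested space fits without pushing the filesystem past 95% (in this run: 5642481664 used + 2214592512 requested = 7857074176, roughly 13% of the 61994713088-byte root overlay). A sketch of that check; converting df's 1K blocks to bytes is an assumption about a step the trace does not show explicitly.

  #!/usr/bin/env bash
  target_dir="${1:-$PWD}"                       # directory being considered
  requested_size=2214592512                     # as traced above

  declare -A fss sizes avails uses
  while read -r source fs size use avail _ mount; do
      fss["$mount"]=$fs
      sizes["$mount"]=$((size * 1024))          # assumed 1K-block -> byte conversion
      uses["$mount"]=$((use * 1024))
      avails["$mount"]=$((avail * 1024))
  done < <(df -T | grep -v Filesystem)

  # Find the mount point behind target_dir, as the trace does with awk.
  mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
  target_space=${avails[$mount]:-0}

  if (( target_space >= requested_size )); then
      new_size=$(( uses[$mount] + requested_size ))
      if (( new_size * 100 / sizes[$mount] > 95 )); then
          echo "would leave $mount >95% full; try the next candidate" >&2
      else
          export SPDK_TEST_STORAGE=$target_dir
          echo "using test storage at $SPDK_TEST_STORAGE (mount $mount)"
      fi
  fi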
00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=56352231424 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7857074176 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.513 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.514 14:11:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:08.514 14:11:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.050 
14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:11.050 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:11.050 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:11.050 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:11.050 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:11.050 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:10:11.051 00:10:11.051 --- 10.0.0.2 ping statistics --- 00:10:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.051 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:10:11.051 00:10:11.051 --- 10.0.0.1 ping statistics --- 00:10:11.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.051 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.051 ************************************ 00:10:11.051 START TEST nvmf_filesystem_no_in_capsule 00:10:11.051 ************************************ 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=860663 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 860663 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 860663 ']' 00:10:11.051 14:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.051 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.051 [2024-07-25 14:11:40.397622] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:10:11.051 [2024-07-25 14:11:40.397695] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.051 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.051 [2024-07-25 14:11:40.467908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.051 [2024-07-25 14:11:40.578866] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.051 [2024-07-25 14:11:40.578926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:11.051 [2024-07-25 14:11:40.578940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.051 [2024-07-25 14:11:40.578951] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.051 [2024-07-25 14:11:40.578961] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
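At this point nvmftestinit has finished wiring up the loopback NVMe/TCP test bed and nvmfappstart has launched the target application inside the target-side namespace. A condensed sketch of the commands visible in the trace above (paths shortened; interface, namespace, and address names are the ones printed by nvmf/common.sh, so treat this as an illustration rather than the literal script):
ip netns add cvl_0_0_ns_spdk                                        # target-side network namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one E810 port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator sanity check
modprobe nvme-tcp                                                   # host-side NVMe/TCP driver
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF        # the target started above as pid 860663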
00:10:11.051 [2024-07-25 14:11:40.579088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.051 [2024-07-25 14:11:40.579115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.051 [2024-07-25 14:11:40.579153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.051 [2024-07-25 14:11:40.579157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.310 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.310 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:11.310 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.310 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.310 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.310 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.310 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:11.310 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.311 [2024-07-25 14:11:40.735600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.311 Malloc1 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.311 14:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.311 [2024-07-25 14:11:40.922007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:11.311 { 00:10:11.311 "name": "Malloc1", 00:10:11.311 "aliases": [ 00:10:11.311 "b921cde1-582a-499c-b221-050cb139b49d" 00:10:11.311 ], 00:10:11.311 "product_name": "Malloc disk", 00:10:11.311 "block_size": 512, 00:10:11.311 "num_blocks": 1048576, 00:10:11.311 "uuid": "b921cde1-582a-499c-b221-050cb139b49d", 00:10:11.311 "assigned_rate_limits": { 00:10:11.311 "rw_ios_per_sec": 0, 00:10:11.311 "rw_mbytes_per_sec": 0, 00:10:11.311 "r_mbytes_per_sec": 0, 00:10:11.311 "w_mbytes_per_sec": 0 00:10:11.311 }, 00:10:11.311 "claimed": true, 00:10:11.311 "claim_type": "exclusive_write", 00:10:11.311 "zoned": false, 00:10:11.311 "supported_io_types": { 00:10:11.311 "read": 
true, 00:10:11.311 "write": true, 00:10:11.311 "unmap": true, 00:10:11.311 "flush": true, 00:10:11.311 "reset": true, 00:10:11.311 "nvme_admin": false, 00:10:11.311 "nvme_io": false, 00:10:11.311 "nvme_io_md": false, 00:10:11.311 "write_zeroes": true, 00:10:11.311 "zcopy": true, 00:10:11.311 "get_zone_info": false, 00:10:11.311 "zone_management": false, 00:10:11.311 "zone_append": false, 00:10:11.311 "compare": false, 00:10:11.311 "compare_and_write": false, 00:10:11.311 "abort": true, 00:10:11.311 "seek_hole": false, 00:10:11.311 "seek_data": false, 00:10:11.311 "copy": true, 00:10:11.311 "nvme_iov_md": false 00:10:11.311 }, 00:10:11.311 "memory_domains": [ 00:10:11.311 { 00:10:11.311 "dma_device_id": "system", 00:10:11.311 "dma_device_type": 1 00:10:11.311 }, 00:10:11.311 { 00:10:11.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.311 "dma_device_type": 2 00:10:11.311 } 00:10:11.311 ], 00:10:11.311 "driver_specific": {} 00:10:11.311 } 00:10:11.311 ]' 00:10:11.311 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:11.571 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:11.571 14:11:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:11.571 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:11.571 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:11.571 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:11.571 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:11.571 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:12.199 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:12.199 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:12.199 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:12.199 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:12.199 14:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:14.103 14:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:14.670 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:14.930 14:11:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:15.867 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:15.868 ************************************ 00:10:15.868 START TEST filesystem_ext4 00:10:15.868 ************************************ 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
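By the time the ext4/btrfs/xfs subtests start here, the host side has connected to the exported subsystem, located the namespace by its serial, checked its size against the 512 MiB Malloc1 bdev, and carved a single GPT partition. Condensed from the trace above (host NQN/ID are the values printed there; a sketch, not the literal script):
nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # resolves to nvme0n1
mkdir -p /mnt/device
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%                         # one partition spanning the disk
partprobe
# each filesystem subtest below then runs mkfs on /dev/${nvme_name}p1, mounts it on /mnt/device,
# touches/syncs/removes a file, unmounts, and confirms the target process (pid 860663) is still alive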
00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:15.868 14:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:15.868 mke2fs 1.46.5 (30-Dec-2021) 00:10:15.868 Discarding device blocks: 0/522240 done 00:10:16.128 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:16.128 Filesystem UUID: 85837578-a0c2-4b23-b4ea-11a01ffd5ea3 00:10:16.128 Superblock backups stored on blocks: 00:10:16.128 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:16.128 00:10:16.128 Allocating group tables: 0/64 done 00:10:16.128 Writing inode tables: 0/64 done 00:10:16.386 Creating journal (8192 blocks): done 00:10:17.213 Writing superblocks and filesystem accounting information: 0/64 done 00:10:17.213 00:10:17.213 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:10:17.213 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:17.472 
14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 860663 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:17.472 14:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:17.472 00:10:17.472 real 0m1.611s 00:10:17.472 user 0m0.018s 00:10:17.472 sys 0m0.058s 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:17.472 ************************************ 00:10:17.472 END TEST filesystem_ext4 00:10:17.472 ************************************ 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.472 ************************************ 00:10:17.472 START TEST filesystem_btrfs 00:10:17.472 ************************************ 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:17.472 14:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:17.472 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:17.732 btrfs-progs v6.6.2 00:10:17.732 See https://btrfs.readthedocs.io for more information. 00:10:17.732 00:10:17.732 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:17.732 NOTE: several default settings have changed in version 5.15, please make sure 00:10:17.732 this does not affect your deployments: 00:10:17.732 - DUP for metadata (-m dup) 00:10:17.732 - enabled no-holes (-O no-holes) 00:10:17.732 - enabled free-space-tree (-R free-space-tree) 00:10:17.732 00:10:17.732 Label: (null) 00:10:17.732 UUID: 664d66ab-da17-40ab-afb3-f08f0dd72e1c 00:10:17.732 Node size: 16384 00:10:17.732 Sector size: 4096 00:10:17.732 Filesystem size: 510.00MiB 00:10:17.732 Block group profiles: 00:10:17.732 Data: single 8.00MiB 00:10:17.732 Metadata: DUP 32.00MiB 00:10:17.732 System: DUP 8.00MiB 00:10:17.732 SSD detected: yes 00:10:17.732 Zoned device: no 00:10:17.732 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:17.732 Runtime features: free-space-tree 00:10:17.732 Checksum: crc32c 00:10:17.732 Number of devices: 1 00:10:17.732 Devices: 00:10:17.732 ID SIZE PATH 00:10:17.732 1 510.00MiB /dev/nvme0n1p1 00:10:17.732 00:10:17.732 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:17.732 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.670 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:18.670 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:18.670 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:18.670 14:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 860663 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
grep -q -w nvme0n1 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:18.670 00:10:18.670 real 0m0.976s 00:10:18.670 user 0m0.024s 00:10:18.670 sys 0m0.108s 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:18.670 ************************************ 00:10:18.670 END TEST filesystem_btrfs 00:10:18.670 ************************************ 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:18.670 ************************************ 00:10:18.670 START TEST filesystem_xfs 00:10:18.670 ************************************ 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:18.670 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:18.671 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:18.671 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:18.671 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:18.671 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:18.671 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:10:18.671 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:18.671 14:11:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:18.671 14:11:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:18.671 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:18.671 = sectsz=512 attr=2, projid32bit=1 00:10:18.671 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:18.671 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:18.671 data = bsize=4096 blocks=130560, imaxpct=25 00:10:18.671 = sunit=0 swidth=0 blks 00:10:18.671 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:18.671 log =internal log bsize=4096 blocks=16384, version=2 00:10:18.671 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:18.671 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:19.609 Discarding blocks...Done. 00:10:19.609 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:19.609 14:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 860663 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:21.516 00:10:21.516 real 0m3.028s 00:10:21.516 user 0m0.021s 00:10:21.516 sys 0m0.055s 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:21.516 ************************************ 00:10:21.516 END TEST filesystem_xfs 00:10:21.516 ************************************ 00:10:21.516 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:21.516 14:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.777 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 860663 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 860663 ']' 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 860663 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 860663 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:21.777 14:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 860663' 00:10:21.777 killing process with pid 860663 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 860663 00:10:21.777 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 860663 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:22.345 00:10:22.345 real 0m11.477s 00:10:22.345 user 0m43.862s 00:10:22.345 sys 0m1.778s 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.345 ************************************ 00:10:22.345 END TEST nvmf_filesystem_no_in_capsule 00:10:22.345 ************************************ 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.345 ************************************ 00:10:22.345 START TEST nvmf_filesystem_in_capsule 00:10:22.345 ************************************ 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=862335 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 862335 00:10:22.345 14:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 862335 ']' 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:22.345 14:11:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.345 [2024-07-25 14:11:51.917570] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:10:22.345 [2024-07-25 14:11:51.917643] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.345 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.345 [2024-07-25 14:11:51.981865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.602 [2024-07-25 14:11:52.093917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.602 [2024-07-25 14:11:52.093975] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.602 [2024-07-25 14:11:52.093989] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.602 [2024-07-25 14:11:52.094000] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.602 [2024-07-25 14:11:52.094010] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
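The nvmfappstart/waitforlisten sequence traced above amounts to launching nvmf_tgt inside the target namespace and polling its RPC socket before any rpc_cmd is issued. A minimal sketch of that pattern in plain shell, with the autotest helpers, retry limits and shared-memory id handling simplified away (binary path, core mask and trace flags as in the log):

    # start the SPDK NVMe-oF target in the dedicated network namespace
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the RPC server answers on /var/tmp/spdk.sock; only then is it safe to configure the target
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done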
00:10:22.602 [2024-07-25 14:11:52.094149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.602 [2024-07-25 14:11:52.098078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.602 [2024-07-25 14:11:52.098150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.602 [2024-07-25 14:11:52.098154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.602 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.602 [2024-07-25 14:11:52.253658] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.860 Malloc1 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.860 [2024-07-25 14:11:52.440100] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:22.860 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:10:22.860 { 00:10:22.860 "name": "Malloc1", 00:10:22.860 "aliases": [ 00:10:22.860 "c99482f8-ae02-458e-9c58-4ed81497c16f" 00:10:22.860 ], 00:10:22.860 "product_name": "Malloc disk", 00:10:22.860 "block_size": 512, 00:10:22.860 "num_blocks": 1048576, 00:10:22.860 "uuid": "c99482f8-ae02-458e-9c58-4ed81497c16f", 00:10:22.860 "assigned_rate_limits": { 00:10:22.860 "rw_ios_per_sec": 0, 00:10:22.860 "rw_mbytes_per_sec": 0, 00:10:22.860 "r_mbytes_per_sec": 0, 00:10:22.860 "w_mbytes_per_sec": 0 00:10:22.860 }, 00:10:22.860 "claimed": true, 00:10:22.860 "claim_type": "exclusive_write", 00:10:22.860 "zoned": false, 00:10:22.860 "supported_io_types": { 00:10:22.860 "read": true, 00:10:22.860 "write": true, 00:10:22.860 "unmap": true, 00:10:22.860 "flush": true, 00:10:22.860 "reset": true, 00:10:22.860 "nvme_admin": false, 
00:10:22.860 "nvme_io": false, 00:10:22.860 "nvme_io_md": false, 00:10:22.860 "write_zeroes": true, 00:10:22.860 "zcopy": true, 00:10:22.860 "get_zone_info": false, 00:10:22.860 "zone_management": false, 00:10:22.860 "zone_append": false, 00:10:22.860 "compare": false, 00:10:22.860 "compare_and_write": false, 00:10:22.860 "abort": true, 00:10:22.860 "seek_hole": false, 00:10:22.860 "seek_data": false, 00:10:22.860 "copy": true, 00:10:22.860 "nvme_iov_md": false 00:10:22.860 }, 00:10:22.861 "memory_domains": [ 00:10:22.861 { 00:10:22.861 "dma_device_id": "system", 00:10:22.861 "dma_device_type": 1 00:10:22.861 }, 00:10:22.861 { 00:10:22.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.861 "dma_device_type": 2 00:10:22.861 } 00:10:22.861 ], 00:10:22.861 "driver_specific": {} 00:10:22.861 } 00:10:22.861 ]' 00:10:22.861 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:10:22.861 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:10:22.861 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:10:23.120 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:10:23.120 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:10:23.120 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:10:23.120 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:23.120 14:11:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.687 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:23.687 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:23.687 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.687 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:23.687 14:11:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:25.591 14:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:25.591 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:25.850 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:26.418 14:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:27.354 ************************************ 00:10:27.354 START TEST filesystem_in_capsule_ext4 00:10:27.354 ************************************ 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:27.354 14:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:27.354 14:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:27.354 mke2fs 1.46.5 (30-Dec-2021) 00:10:27.612 Discarding device blocks: 0/522240 done 00:10:27.612 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:27.612 Filesystem UUID: 033b787c-0c05-4763-a9fb-7f988e7f5b18 00:10:27.612 Superblock backups stored on blocks: 00:10:27.612 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:27.612 00:10:27.612 Allocating group tables: 0/64 done 00:10:27.612 Writing inode tables: 0/64 done 00:10:27.612 Creating journal (8192 blocks): done 00:10:27.612 Writing superblocks and filesystem accounting information: 0/64 done 00:10:27.612 00:10:27.612 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:10:27.612 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:27.869 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:27.869 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:27.869 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:27.869 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:27.869 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:27.869 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.128 14:11:57 
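Each filesystem leg then repeats the same smoke test against the exported namespace: make the filesystem on the partition, mount it, create and delete a file with syncs in between, unmount, and confirm the target process survived the I/O. In outline, paraphrasing the traced target/filesystem.sh steps with the retry logic trimmed:

    mkfs.ext4 -F /dev/nvme0n1p1     # the make_filesystem helper always passes the force flag for ext4
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync   # prove writes reach the NVMe/TCP-backed device
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"              # target must still be running after the filesystem exercise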
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 862335 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.128 00:10:28.128 real 0m0.582s 00:10:28.128 user 0m0.011s 00:10:28.128 sys 0m0.057s 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:28.128 ************************************ 00:10:28.128 END TEST filesystem_in_capsule_ext4 00:10:28.128 ************************************ 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:28.128 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.129 ************************************ 00:10:28.129 START TEST filesystem_in_capsule_btrfs 00:10:28.129 ************************************ 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:28.129 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:28.388 btrfs-progs v6.6.2 00:10:28.388 See https://btrfs.readthedocs.io for more information. 00:10:28.388 00:10:28.388 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:28.388 NOTE: several default settings have changed in version 5.15, please make sure 00:10:28.388 this does not affect your deployments: 00:10:28.388 - DUP for metadata (-m dup) 00:10:28.388 - enabled no-holes (-O no-holes) 00:10:28.388 - enabled free-space-tree (-R free-space-tree) 00:10:28.388 00:10:28.388 Label: (null) 00:10:28.388 UUID: 4cbca313-9c18-48b4-b9c4-780171450e4a 00:10:28.388 Node size: 16384 00:10:28.388 Sector size: 4096 00:10:28.388 Filesystem size: 510.00MiB 00:10:28.388 Block group profiles: 00:10:28.388 Data: single 8.00MiB 00:10:28.388 Metadata: DUP 32.00MiB 00:10:28.388 System: DUP 8.00MiB 00:10:28.388 SSD detected: yes 00:10:28.388 Zoned device: no 00:10:28.388 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:28.388 Runtime features: free-space-tree 00:10:28.388 Checksum: crc32c 00:10:28.388 Number of devices: 1 00:10:28.388 Devices: 00:10:28.388 ID SIZE PATH 00:10:28.388 1 510.00MiB /dev/nvme0n1p1 00:10:28.388 00:10:28.388 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:28.388 14:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 862335 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:28.649 00:10:28.649 real 0m0.591s 00:10:28.649 user 0m0.019s 00:10:28.649 sys 0m0.111s 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:28.649 ************************************ 00:10:28.649 END TEST filesystem_in_capsule_btrfs 00:10:28.649 ************************************ 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:28.649 ************************************ 00:10:28.649 START TEST filesystem_in_capsule_xfs 00:10:28.649 ************************************ 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:10:28.649 14:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:28.649 14:11:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:28.907 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:28.907 = sectsz=512 attr=2, projid32bit=1 00:10:28.907 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:28.907 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:28.907 data = bsize=4096 blocks=130560, imaxpct=25 00:10:28.907 = sunit=0 swidth=0 blks 00:10:28.907 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:28.907 log =internal log bsize=4096 blocks=16384, version=2 00:10:28.907 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:28.907 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:29.848 Discarding blocks...Done. 00:10:29.848 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:29.848 14:11:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 862335 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:32.425 00:10:32.425 real 0m3.345s 00:10:32.425 user 0m0.015s 00:10:32.425 sys 0m0.054s 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.425 
14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:32.425 ************************************ 00:10:32.425 END TEST filesystem_in_capsule_xfs 00:10:32.425 ************************************ 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:32.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 862335 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 862335 ']' 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 862335 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
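Teardown mirrors the setup in reverse, and the ordering matters: the partition is removed and the host disconnected before the subsystem is deleted and the target killed. Condensed from the trace above, with waitforserial_disconnect reduced to a simple polling loop:

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition under an exclusive lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    # wait until the SPDKISFASTANDAWESOME serial disappears from lsblk before touching the target
    while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"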
00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 862335 00:10:32.425 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:32.426 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:32.426 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 862335' 00:10:32.426 killing process with pid 862335 00:10:32.426 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 862335 00:10:32.426 14:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 862335 00:10:32.686 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:32.686 00:10:32.686 real 0m10.460s 00:10:32.686 user 0m39.878s 00:10:32.686 sys 0m1.698s 00:10:32.686 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.686 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.686 ************************************ 00:10:32.686 END TEST nvmf_filesystem_in_capsule 00:10:32.686 ************************************ 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:32.945 rmmod nvme_tcp 00:10:32.945 rmmod nvme_fabrics 00:10:32.945 rmmod nvme_keyring 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:32.945 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.946 14:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:34.853 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:34.853 00:10:34.853 real 0m26.600s 00:10:34.853 user 1m24.676s 00:10:34.853 sys 0m5.192s 00:10:34.853 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:34.853 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:34.853 ************************************ 00:10:34.853 END TEST nvmf_filesystem 00:10:34.853 ************************************ 00:10:34.853 14:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:10:34.853 14:12:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:34.853 14:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:34.853 14:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:34.853 14:12:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:35.112 ************************************ 00:10:35.112 START TEST nvmf_target_discovery 00:10:35.112 ************************************ 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:35.112 * Looking for test storage... 
00:10:35.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.112 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:35.113 14:12:04 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:37.645 14:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:37.645 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:37.645 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:37.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:37.646 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:37.646 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:10:37.646 00:10:37.646 --- 10.0.0.2 ping statistics --- 00:10:37.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.646 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:37.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:10:37.646 00:10:37.646 --- 10.0.0.1 ping statistics --- 00:10:37.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.646 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=865781 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 865781 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 865781 ']' 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:37.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.646 14:12:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.646 [2024-07-25 14:12:06.893569] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:10:37.646 [2024-07-25 14:12:06.893646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.646 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.646 [2024-07-25 14:12:06.959218] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.647 [2024-07-25 14:12:07.069947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.647 [2024-07-25 14:12:07.070011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.647 [2024-07-25 14:12:07.070024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.647 [2024-07-25 14:12:07.070036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.647 [2024-07-25 14:12:07.070065] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.647 [2024-07-25 14:12:07.070153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.647 [2024-07-25 14:12:07.070283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.647 [2024-07-25 14:12:07.070305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.647 [2024-07-25 14:12:07.070308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.647 [2024-07-25 14:12:07.228627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:37.647 14:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.647 Null1 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.647 [2024-07-25 14:12:07.268957] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.647 Null2 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.647 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 Null3 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.906 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:37.907 14:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 Null4 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:37.907 00:10:37.907 Discovery Log Number of Records 6, Generation counter 6 00:10:37.907 
=====Discovery Log Entry 0====== 00:10:37.907 trtype: tcp 00:10:37.907 adrfam: ipv4 00:10:37.907 subtype: current discovery subsystem 00:10:37.907 treq: not required 00:10:37.907 portid: 0 00:10:37.907 trsvcid: 4420 00:10:37.907 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:37.907 traddr: 10.0.0.2 00:10:37.907 eflags: explicit discovery connections, duplicate discovery information 00:10:37.907 sectype: none 00:10:37.907 =====Discovery Log Entry 1====== 00:10:37.907 trtype: tcp 00:10:37.907 adrfam: ipv4 00:10:37.907 subtype: nvme subsystem 00:10:37.907 treq: not required 00:10:37.907 portid: 0 00:10:37.907 trsvcid: 4420 00:10:37.907 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:37.907 traddr: 10.0.0.2 00:10:37.907 eflags: none 00:10:37.907 sectype: none 00:10:37.907 =====Discovery Log Entry 2====== 00:10:37.907 trtype: tcp 00:10:37.907 adrfam: ipv4 00:10:37.907 subtype: nvme subsystem 00:10:37.907 treq: not required 00:10:37.907 portid: 0 00:10:37.907 trsvcid: 4420 00:10:37.907 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:37.907 traddr: 10.0.0.2 00:10:37.907 eflags: none 00:10:37.907 sectype: none 00:10:37.907 =====Discovery Log Entry 3====== 00:10:37.907 trtype: tcp 00:10:37.907 adrfam: ipv4 00:10:37.907 subtype: nvme subsystem 00:10:37.907 treq: not required 00:10:37.907 portid: 0 00:10:37.907 trsvcid: 4420 00:10:37.907 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:37.907 traddr: 10.0.0.2 00:10:37.907 eflags: none 00:10:37.907 sectype: none 00:10:37.907 =====Discovery Log Entry 4====== 00:10:37.907 trtype: tcp 00:10:37.907 adrfam: ipv4 00:10:37.907 subtype: nvme subsystem 00:10:37.907 treq: not required 00:10:37.907 portid: 0 00:10:37.907 trsvcid: 4420 00:10:37.907 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:37.907 traddr: 10.0.0.2 00:10:37.907 eflags: none 00:10:37.907 sectype: none 00:10:37.907 =====Discovery Log Entry 5====== 00:10:37.907 trtype: tcp 00:10:37.907 adrfam: ipv4 00:10:37.907 subtype: discovery subsystem referral 00:10:37.907 treq: not required 00:10:37.907 portid: 0 00:10:37.907 trsvcid: 4430 00:10:37.907 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:37.907 traddr: 10.0.0.2 00:10:37.907 eflags: none 00:10:37.907 sectype: none 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:37.907 Perform nvmf subsystem discovery via RPC 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.907 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.907 [ 00:10:37.907 { 00:10:37.907 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:37.907 "subtype": "Discovery", 00:10:37.907 "listen_addresses": [ 00:10:37.907 { 00:10:37.907 "trtype": "TCP", 00:10:37.907 "adrfam": "IPv4", 00:10:37.907 "traddr": "10.0.0.2", 00:10:37.907 "trsvcid": "4420" 00:10:37.907 } 00:10:37.907 ], 00:10:37.907 "allow_any_host": true, 00:10:37.907 "hosts": [] 00:10:37.907 }, 00:10:37.907 { 00:10:37.907 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:37.907 "subtype": "NVMe", 00:10:37.907 "listen_addresses": [ 00:10:37.907 { 00:10:37.907 "trtype": "TCP", 00:10:37.907 "adrfam": "IPv4", 00:10:37.907 "traddr": "10.0.0.2", 00:10:37.907 "trsvcid": "4420" 00:10:37.907 } 00:10:37.907 ], 00:10:37.907 "allow_any_host": true, 00:10:37.907 "hosts": [], 00:10:37.907 
"serial_number": "SPDK00000000000001", 00:10:37.907 "model_number": "SPDK bdev Controller", 00:10:37.907 "max_namespaces": 32, 00:10:37.907 "min_cntlid": 1, 00:10:37.907 "max_cntlid": 65519, 00:10:37.907 "namespaces": [ 00:10:37.907 { 00:10:37.907 "nsid": 1, 00:10:37.907 "bdev_name": "Null1", 00:10:37.907 "name": "Null1", 00:10:37.907 "nguid": "2D6E1F7941734F3C91DE671DB744AC6B", 00:10:37.907 "uuid": "2d6e1f79-4173-4f3c-91de-671db744ac6b" 00:10:37.907 } 00:10:37.907 ] 00:10:37.907 }, 00:10:37.907 { 00:10:37.907 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:37.907 "subtype": "NVMe", 00:10:37.907 "listen_addresses": [ 00:10:37.907 { 00:10:37.907 "trtype": "TCP", 00:10:37.907 "adrfam": "IPv4", 00:10:37.907 "traddr": "10.0.0.2", 00:10:37.907 "trsvcid": "4420" 00:10:37.907 } 00:10:37.907 ], 00:10:37.907 "allow_any_host": true, 00:10:37.907 "hosts": [], 00:10:37.907 "serial_number": "SPDK00000000000002", 00:10:37.907 "model_number": "SPDK bdev Controller", 00:10:37.907 "max_namespaces": 32, 00:10:37.907 "min_cntlid": 1, 00:10:37.907 "max_cntlid": 65519, 00:10:37.907 "namespaces": [ 00:10:37.907 { 00:10:37.907 "nsid": 1, 00:10:37.907 "bdev_name": "Null2", 00:10:37.907 "name": "Null2", 00:10:37.907 "nguid": "94985265BEF745D6AADAA772B3E9B0C8", 00:10:37.907 "uuid": "94985265-bef7-45d6-aada-a772b3e9b0c8" 00:10:37.907 } 00:10:37.907 ] 00:10:37.907 }, 00:10:37.907 { 00:10:37.907 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:37.907 "subtype": "NVMe", 00:10:37.907 "listen_addresses": [ 00:10:37.907 { 00:10:37.907 "trtype": "TCP", 00:10:37.907 "adrfam": "IPv4", 00:10:37.907 "traddr": "10.0.0.2", 00:10:37.907 "trsvcid": "4420" 00:10:37.907 } 00:10:37.907 ], 00:10:37.907 "allow_any_host": true, 00:10:37.907 "hosts": [], 00:10:37.907 "serial_number": "SPDK00000000000003", 00:10:37.907 "model_number": "SPDK bdev Controller", 00:10:37.907 "max_namespaces": 32, 00:10:37.907 "min_cntlid": 1, 00:10:37.907 "max_cntlid": 65519, 00:10:37.907 "namespaces": [ 00:10:37.907 { 00:10:37.907 "nsid": 1, 00:10:37.907 "bdev_name": "Null3", 00:10:37.907 "name": "Null3", 00:10:37.907 "nguid": "F21C12D14B96487A942FE9E48D9AD5F9", 00:10:37.907 "uuid": "f21c12d1-4b96-487a-942f-e9e48d9ad5f9" 00:10:37.907 } 00:10:37.907 ] 00:10:37.907 }, 00:10:37.907 { 00:10:37.908 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:37.908 "subtype": "NVMe", 00:10:37.908 "listen_addresses": [ 00:10:37.908 { 00:10:37.908 "trtype": "TCP", 00:10:37.908 "adrfam": "IPv4", 00:10:37.908 "traddr": "10.0.0.2", 00:10:37.908 "trsvcid": "4420" 00:10:37.908 } 00:10:37.908 ], 00:10:37.908 "allow_any_host": true, 00:10:37.908 "hosts": [], 00:10:37.908 "serial_number": "SPDK00000000000004", 00:10:37.908 "model_number": "SPDK bdev Controller", 00:10:37.908 "max_namespaces": 32, 00:10:37.908 "min_cntlid": 1, 00:10:37.908 "max_cntlid": 65519, 00:10:37.908 "namespaces": [ 00:10:37.908 { 00:10:37.908 "nsid": 1, 00:10:37.908 "bdev_name": "Null4", 00:10:37.908 "name": "Null4", 00:10:37.908 "nguid": "500EA90BCABF42E9A8C34C11B44ECED6", 00:10:37.908 "uuid": "500ea90b-cabf-42e9-a8c3-4c11b44eced6" 00:10:37.908 } 00:10:37.908 ] 00:10:37.908 } 00:10:37.908 ] 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:37.908 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:38.166 14:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:38.166 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:38.167 rmmod nvme_tcp 00:10:38.167 rmmod nvme_fabrics 00:10:38.167 rmmod nvme_keyring 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 
00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 865781 ']' 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 865781 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 865781 ']' 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 865781 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865781 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865781' 00:10:38.167 killing process with pid 865781 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 865781 00:10:38.167 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 865781 00:10:38.425 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:38.425 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:38.425 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:38.425 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:38.425 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:38.425 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.425 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.425 14:12:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:40.963 00:10:40.963 real 0m5.504s 00:10:40.963 user 0m4.373s 00:10:40.963 sys 0m1.867s 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.963 ************************************ 00:10:40.963 END TEST nvmf_target_discovery 00:10:40.963 ************************************ 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:40.963 ************************************ 00:10:40.963 START TEST nvmf_referrals 00:10:40.963 ************************************ 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:40.963 * Looking for test storage... 00:10:40.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.963 14:12:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.963 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:40.964 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:40.964 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:40.964 14:12:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:42.866 14:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:42.866 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:42.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.867 14:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:42.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:42.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:42.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:42.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:42.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:10:42.867 00:10:42.867 --- 10.0.0.2 ping statistics --- 00:10:42.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.867 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:42.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:42.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:10:42.867 00:10:42.867 --- 10.0.0.1 ping statistics --- 00:10:42.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:42.867 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:42.867 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=868377 00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 868377 00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 868377 ']' 00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
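(Aside) The trace above is the nvmf_tcp_init setup from nvmf/common.sh: the first port (cvl_0_0) is moved into a private network namespace to act as the target side, the second port (cvl_0_1) stays in the root namespace as the initiator side, and one ping in each direction verifies the 10.0.0.0/24 link before the target application is started. A condensed sketch of those steps, using only the commands visible in the trace (the interface and namespace names are the ones this rig happens to use):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
    ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace
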
00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:42.868 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:42.868 [2024-07-25 14:12:12.447759] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:10:42.868 [2024-07-25 14:12:12.447837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:42.868 EAL: No free 2048 kB hugepages reported on node 1 00:10:42.868 [2024-07-25 14:12:12.511297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:43.125 [2024-07-25 14:12:12.613526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:43.125 [2024-07-25 14:12:12.613582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:43.125 [2024-07-25 14:12:12.613611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:43.125 [2024-07-25 14:12:12.613622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:43.125 [2024-07-25 14:12:12.613632] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:43.125 [2024-07-25 14:12:12.613721] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.125 [2024-07-25 14:12:12.613786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:43.125 [2024-07-25 14:12:12.613853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:43.125 [2024-07-25 14:12:12.613856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.125 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.125 [2024-07-25 14:12:12.772315] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.384 14:12:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.384 [2024-07-25 14:12:12.784573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.384 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:43.385 14:12:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.643 14:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.643 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:43.644 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:43.903 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:43.903 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:43.903 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:43.904 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.161 14:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:44.161 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:44.419 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:44.419 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:44.419 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:44.419 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:44.419 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:44.419 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:44.419 14:12:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:44.419 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:44.419 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:44.419 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
00:10:44.419 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:44.419 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:44.419 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:44.677 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
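(Aside) The referrals.sh run traced above reduces to an add/verify/remove cycle: referrals are registered through rpc_cmd (the autotest helper that forwards these calls to the target's JSON-RPC socket, /var/tmp/spdk.sock here), then read back both from the RPC side and from the initiator's discovery log via nvme discover, and the two views are compared after every change. A condensed sketch of that cycle, with the commands and jq filters taken from the trace (NVME_HOSTNQN/NVME_HOSTID are the per-host values nvmf/common.sh sets up):

    # register three discovery referrals on the running target
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # the RPC view of the referral list ...
    rpc_cmd nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # ... must match what an initiator sees in the discovery log page
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # remove them again; the referral list should drop back to length 0
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc_cmd nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    rpc_cmd nvmf_discovery_get_referrals | jq length

The later part of the trace repeats the same comparison for referrals that carry an explicit subsystem NQN (-n nqn.2016-06.io.spdk:cnode1 and -n discovery), checking the subnqn field of the discovery records instead of only the traddr.
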
00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:44.678 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:44.678 rmmod nvme_tcp 00:10:44.937 rmmod nvme_fabrics 00:10:44.937 rmmod nvme_keyring 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 868377 ']' 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 868377 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 868377 ']' 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 868377 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 868377 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 868377' 00:10:44.937 killing process with pid 868377 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 868377 00:10:44.937 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 868377 00:10:45.197 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:45.197 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:45.197 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:45.197 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:45.197 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:45.197 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.197 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.197 14:12:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.101 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:47.101 00:10:47.101 real 0m6.640s 00:10:47.101 user 0m9.474s 00:10:47.101 sys 0m2.111s 00:10:47.101 14:12:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:47.101 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.101 ************************************ 00:10:47.101 END TEST nvmf_referrals 00:10:47.101 ************************************ 00:10:47.101 14:12:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:10:47.101 14:12:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:47.101 14:12:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:47.101 14:12:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:47.101 14:12:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:47.360 ************************************ 00:10:47.360 START TEST nvmf_connect_disconnect 00:10:47.360 ************************************ 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:47.360 * Looking for test storage... 00:10:47.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:47.360 14:12:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:49.895 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:49.895 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.895 14:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:49.895 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:49.895 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.895 14:12:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:49.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:10:49.895 00:10:49.895 --- 10.0.0.2 ping statistics --- 00:10:49.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.895 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:10:49.895 00:10:49.895 --- 10.0.0.1 ping statistics --- 00:10:49.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.895 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=870670 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 870670 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 870670 ']' 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:49.895 [2024-07-25 14:12:19.146808] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
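The nvmf/common.sh trace above amounts to a small amount of iproute2 plumbing: one of the two E810 ports (cvl_0_0) is moved into a private network namespace and becomes the target-side interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, with a firewall exception for NVMe/TCP port 4420 and a bidirectional ping check before the target application is launched inside the namespace. A condensed sketch of those steps, using the interface names, addresses, and namespace name seen in this run (run with root privileges):
  # flush stale addresses on both ports
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  # target-side port goes into its own namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # addressing: initiator in the root namespace, target inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic to port 4420 on the initiator-facing interface
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # verify both directions before starting nvmf_tgt inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1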
00:10:49.895 [2024-07-25 14:12:19.146892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.895 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.895 [2024-07-25 14:12:19.207270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.895 [2024-07-25 14:12:19.308406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.895 [2024-07-25 14:12:19.308460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.895 [2024-07-25 14:12:19.308488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.895 [2024-07-25 14:12:19.308500] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.895 [2024-07-25 14:12:19.308509] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.895 [2024-07-25 14:12:19.308589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.895 [2024-07-25 14:12:19.308654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.895 [2024-07-25 14:12:19.308760] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.895 [2024-07-25 14:12:19.308768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.895 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:49.896 [2024-07-25 14:12:19.475582] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.896 14:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:49.896 [2024-07-25 14:12:19.532909] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:49.896 14:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:53.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.809 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:04.094 14:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:04.094 rmmod nvme_tcp 00:11:04.094 rmmod nvme_fabrics 00:11:04.094 rmmod nvme_keyring 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 870670 ']' 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 870670 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 870670 ']' 00:11:04.094 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 870670 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 870670 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 870670' 00:11:04.095 killing process with pid 870670 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 870670 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 870670 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.095 14:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.000 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:06.000 00:11:06.000 real 0m18.813s 00:11:06.000 user 0m56.063s 00:11:06.000 sys 0m3.451s 00:11:06.000 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.000 14:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:06.000 ************************************ 00:11:06.000 END TEST nvmf_connect_disconnect 00:11:06.000 ************************************ 00:11:06.000 14:12:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:11:06.001 14:12:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:06.001 14:12:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:06.001 14:12:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.001 14:12:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:06.001 ************************************ 00:11:06.001 START TEST nvmf_multitarget 00:11:06.001 ************************************ 00:11:06.001 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:06.261 * Looking for test storage... 00:11:06.261 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.261 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 
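The connect_disconnect case that finished above provisions the target entirely over JSON-RPC; rpc_cmd in the trace is a thin wrapper that normally forwards to SPDK's scripts/rpc.py against the target's /var/tmp/spdk.sock. A standalone sketch of the same provisioning plus one of the five connect/disconnect iterations follows; note the nvme connect flags are not echoed by the script (xtrace is disabled inside the loop), so the initiator commands here are illustrative nvme-cli usage rather than a verbatim copy:
  # target side: transport, backing bdev, subsystem, namespace, listener
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512        # returns the bdev name, Malloc0 in this run
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: one connect/disconnect iteration (illustrative flags)
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "... disconnected 1 controller(s)" as seen above
Because the RPC endpoint is a Unix-domain socket on the shared filesystem, it can be driven from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.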
00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:06.262 14:12:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.162 
14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:08.162 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:08.162 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:08.162 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.162 14:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:08.162 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.162 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:08.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:11:08.163 00:11:08.163 --- 10.0.0.2 ping statistics --- 00:11:08.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.163 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:11:08.163 00:11:08.163 --- 10.0.0.1 ping statistics --- 00:11:08.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.163 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=874312 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 874312 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 874312 ']' 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
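Before any of that, gather_supported_nvmf_pci_devs (traced from nvmf/common.sh@289 onward) decides which NICs the test may use: it keeps a whitelist of Intel E810/X722 and Mellanox device IDs, intersects it with the PCI bus, and resolves each surviving function to its kernel interface through sysfs, which is how 0000:0a:00.0 and 0000:0a:00.1 become cvl_0_0 and cvl_0_1. A minimal sketch of that sysfs resolution step, trimmed to the two functions seen in this run:
  # map a whitelisted PCI function to its kernel net device name(s)
  for pci in 0000:0a:00.0 0000:0a:00.1; do            # E810 ports (0x8086 - 0x159b)
      for netdev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$netdev" ] || continue                # glob did not match: no netdev bound
          echo "Found net devices under $pci: ${netdev##*/}"
      done
  done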
00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.163 14:12:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:08.421 [2024-07-25 14:12:37.819437] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:11:08.422 [2024-07-25 14:12:37.819524] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.422 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.422 [2024-07-25 14:12:37.887356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.422 [2024-07-25 14:12:38.002668] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.422 [2024-07-25 14:12:38.002716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.422 [2024-07-25 14:12:38.002730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.422 [2024-07-25 14:12:38.002741] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.422 [2024-07-25 14:12:38.002751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.422 [2024-07-25 14:12:38.002895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.422 [2024-07-25 14:12:38.002960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.422 [2024-07-25 14:12:38.003015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.422 [2024-07-25 14:12:38.003012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:08.680 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:08.938 "nvmf_tgt_1" 00:11:08.938 14:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:08.938 "nvmf_tgt_2" 00:11:08.938 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:08.938 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:09.198 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:09.198 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:09.198 true 00:11:09.198 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:09.198 true 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:09.459 14:12:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:09.459 rmmod nvme_tcp 00:11:09.459 rmmod nvme_fabrics 00:11:09.459 rmmod nvme_keyring 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 874312 ']' 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 874312 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 874312 ']' 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 874312 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
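The multitarget case itself is short: it drives multitarget_rpc.py (a JSON-RPC client pointed at the running nvmf_tgt) to add two extra target objects alongside the default one, checks the count with jq, and removes them again, which is the 1 -> 3 -> 1 progression visible in the '[' ... '!=' ... ']' checks above. The same sequence written out directly, using the in-tree script path from this job:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $RPC nvmf_get_targets | jq length            # 1: only the default target exists
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  $RPC nvmf_get_targets | jq length            # 3: default target plus the two new ones
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  $RPC nvmf_get_targets | jq length            # back to 1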
00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 874312 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 874312' 00:11:09.459 killing process with pid 874312 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 874312 00:11:09.459 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 874312 00:11:09.719 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:09.719 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:09.719 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:09.719 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.719 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:09.719 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.719 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.719 14:12:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.255 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:12.255 00:11:12.255 real 0m5.729s 00:11:12.255 user 0m6.567s 00:11:12.255 sys 0m1.876s 00:11:12.255 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:12.255 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:12.255 ************************************ 00:11:12.255 END TEST nvmf_multitarget 00:11:12.255 ************************************ 00:11:12.255 14:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:11:12.255 14:12:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:12.256 ************************************ 00:11:12.256 START TEST nvmf_rpc 00:11:12.256 ************************************ 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:12.256 * Looking for test storage... 
00:11:12.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:12.256 14:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:12.256 14:12:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:14.164 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.165 14:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:14.165 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:14.165 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.165 
14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:14.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:14.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.165 14:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:14.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:11:14.165 00:11:14.165 --- 10.0.0.2 ping statistics --- 00:11:14.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.165 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:14.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:14.165 00:11:14.165 --- 10.0.0.1 ping statistics --- 00:11:14.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.165 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=876412 00:11:14.165 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.166 14:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 876412 00:11:14.166 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 876412 ']' 00:11:14.166 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.166 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:14.166 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.166 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:14.166 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.166 [2024-07-25 14:12:43.697534] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:11:14.166 [2024-07-25 14:12:43.697619] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.166 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.166 [2024-07-25 14:12:43.761321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:14.424 [2024-07-25 14:12:43.868910] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.424 [2024-07-25 14:12:43.868968] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.424 [2024-07-25 14:12:43.868991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.424 [2024-07-25 14:12:43.869002] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.424 [2024-07-25 14:12:43.869011] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
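The nvmf_tcp_init sequence traced above builds the physical-loopback topology the rest of this test file runs on: of the two E810 ports discovered earlier (cvl_0_0 under 0000:0a:00.0, cvl_0_1 under 0000:0a:00.1), the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace at 10.0.0.2 while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables ACCEPT rule admits NVMe/TCP on port 4420, and a ping in each direction proves reachability. Condensed from the commands visible in the trace (interface and namespace names are exactly the ones the log prints; nothing new is assumed):

  # target-side port goes into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1, target namespace gets 10.0.0.2
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP (port 4420) in on the initiator side, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target application itself is then launched with the ip netns exec cvl_0_0_ns_spdk prefix visible in the nvmfappstart line (NVMF_TARGET_NS_CMD), while the nvme connect commands later in the log run from the root namespace against 10.0.0.2:4420.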
00:11:14.424 [2024-07-25 14:12:43.869110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.424 [2024-07-25 14:12:43.869150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.424 [2024-07-25 14:12:43.869250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.424 [2024-07-25 14:12:43.869247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.424 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:14.424 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:14.424 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:14.424 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:14.424 14:12:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:14.424 "tick_rate": 2700000000, 00:11:14.424 "poll_groups": [ 00:11:14.424 { 00:11:14.424 "name": "nvmf_tgt_poll_group_000", 00:11:14.424 "admin_qpairs": 0, 00:11:14.424 "io_qpairs": 0, 00:11:14.424 "current_admin_qpairs": 0, 00:11:14.424 "current_io_qpairs": 0, 00:11:14.424 "pending_bdev_io": 0, 00:11:14.424 "completed_nvme_io": 0, 00:11:14.424 "transports": [] 00:11:14.424 }, 00:11:14.424 { 00:11:14.424 "name": "nvmf_tgt_poll_group_001", 00:11:14.424 "admin_qpairs": 0, 00:11:14.424 "io_qpairs": 0, 00:11:14.424 "current_admin_qpairs": 0, 00:11:14.424 "current_io_qpairs": 0, 00:11:14.424 "pending_bdev_io": 0, 00:11:14.424 "completed_nvme_io": 0, 00:11:14.424 "transports": [] 00:11:14.424 }, 00:11:14.424 { 00:11:14.424 "name": "nvmf_tgt_poll_group_002", 00:11:14.424 "admin_qpairs": 0, 00:11:14.424 "io_qpairs": 0, 00:11:14.424 "current_admin_qpairs": 0, 00:11:14.424 "current_io_qpairs": 0, 00:11:14.424 "pending_bdev_io": 0, 00:11:14.424 "completed_nvme_io": 0, 00:11:14.424 "transports": [] 00:11:14.424 }, 00:11:14.424 { 00:11:14.424 "name": "nvmf_tgt_poll_group_003", 00:11:14.424 "admin_qpairs": 0, 00:11:14.424 "io_qpairs": 0, 00:11:14.424 "current_admin_qpairs": 0, 00:11:14.424 "current_io_qpairs": 0, 00:11:14.424 "pending_bdev_io": 0, 00:11:14.424 "completed_nvme_io": 0, 00:11:14.424 "transports": [] 00:11:14.424 } 00:11:14.424 ] 00:11:14.424 }' 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:14.424 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
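The jcount and jsum helpers whose expansions are traced around the nvmf_get_stats output are just jq filters reduced with wc -l and awk. A rough reconstruction from the traced commands follows; the helper names and the filters are taken from the target/rpc.sh trace above, but the exact function bodies (and feeding the JSON in via a here-string from $stats) are an inferred sketch rather than the script verbatim:

  # count how many values a jq filter yields from the captured stats JSON
  jcount() {
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l
  }
  # sum the numeric values a jq filter yields
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
  }
  # the checks above: 4 poll groups, and no admin/io qpairs before any host connects
  (( $(jcount '.poll_groups[].name') == 4 ))
  (( $(jsum '.poll_groups[].admin_qpairs') == 0 ))
  (( $(jsum '.poll_groups[].io_qpairs') == 0 ))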
00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.683 [2024-07-25 14:12:44.119932] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:14.683 "tick_rate": 2700000000, 00:11:14.683 "poll_groups": [ 00:11:14.683 { 00:11:14.683 "name": "nvmf_tgt_poll_group_000", 00:11:14.683 "admin_qpairs": 0, 00:11:14.683 "io_qpairs": 0, 00:11:14.683 "current_admin_qpairs": 0, 00:11:14.683 "current_io_qpairs": 0, 00:11:14.683 "pending_bdev_io": 0, 00:11:14.683 "completed_nvme_io": 0, 00:11:14.683 "transports": [ 00:11:14.683 { 00:11:14.683 "trtype": "TCP" 00:11:14.683 } 00:11:14.683 ] 00:11:14.683 }, 00:11:14.683 { 00:11:14.683 "name": "nvmf_tgt_poll_group_001", 00:11:14.683 "admin_qpairs": 0, 00:11:14.683 "io_qpairs": 0, 00:11:14.683 "current_admin_qpairs": 0, 00:11:14.683 "current_io_qpairs": 0, 00:11:14.683 "pending_bdev_io": 0, 00:11:14.683 "completed_nvme_io": 0, 00:11:14.683 "transports": [ 00:11:14.683 { 00:11:14.683 "trtype": "TCP" 00:11:14.683 } 00:11:14.683 ] 00:11:14.683 }, 00:11:14.683 { 00:11:14.683 "name": "nvmf_tgt_poll_group_002", 00:11:14.683 "admin_qpairs": 0, 00:11:14.683 "io_qpairs": 0, 00:11:14.683 "current_admin_qpairs": 0, 00:11:14.683 "current_io_qpairs": 0, 00:11:14.683 "pending_bdev_io": 0, 00:11:14.683 "completed_nvme_io": 0, 00:11:14.683 "transports": [ 00:11:14.683 { 00:11:14.683 "trtype": "TCP" 00:11:14.683 } 00:11:14.683 ] 00:11:14.683 }, 00:11:14.683 { 00:11:14.683 "name": "nvmf_tgt_poll_group_003", 00:11:14.683 "admin_qpairs": 0, 00:11:14.683 "io_qpairs": 0, 00:11:14.683 "current_admin_qpairs": 0, 00:11:14.683 "current_io_qpairs": 0, 00:11:14.683 "pending_bdev_io": 0, 00:11:14.683 "completed_nvme_io": 0, 00:11:14.683 "transports": [ 00:11:14.683 { 00:11:14.683 "trtype": "TCP" 00:11:14.683 } 00:11:14.683 ] 00:11:14.683 } 00:11:14.683 ] 00:11:14.683 }' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:14.683 14:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.683 Malloc1 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.683 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.684 [2024-07-25 14:12:44.272652] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:14.684 [2024-07-25 14:12:44.295037] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:14.684 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:14.684 could not add new controller: failed to write to nvme-fabrics device 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.684 14:12:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.620 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.620 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:15.620 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.620 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:15.620 14:12:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.528 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.528 [2024-07-25 14:12:47.127253] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:17.528 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:17.528 could not add new controller: failed to write to nvme-fabrics device 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.528 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.498 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:18.498 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:18.498 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:18.498 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:18.498 14:12:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
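What the target/rpc.sh steps above exercise is the per-subsystem host whitelist: with allow_any_host disabled, a connect that presents the host NQN is rejected with the "does not allow host" error and an Input/output error from /dev/nvme-fabrics, and it only succeeds after nvmf_subsystem_add_host (or after allow_any_host is re-enabled). Stripped of the xtrace noise, the round trip looks roughly like this; scripts/rpc.py standing in for the rpc_cmd wrapper is an assumption, while the NQNs, address and port are the ones in the log:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1      # deny by default
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN" \
      && echo "unexpected: connect should have been rejected"
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"     # whitelist this host
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"  # back to rejecting
  ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1       # open to any host
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q "$HOSTNQN"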
00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:20.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:20.400 14:12:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:20.400 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:20.400 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.400 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.400 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.400 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.400 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:20.400 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:20.400 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.401 [2024-07-25 14:12:50.034116] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.401 
14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.401 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.660 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.660 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:21.228 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:21.228 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:21.228 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:21.228 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:21.228 14:12:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:23.132 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:23.132 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:23.132 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:23.132 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:23.132 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:23.132 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:23.132 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:23.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
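From this point the test repeats the same attach/verify/detach cycle five times (loops=5, seq 1 5 above): create the subsystem, add the TCP listener on 10.0.0.2:4420, attach Malloc1 as namespace 5, open the subsystem to any host, connect from the initiator, check that exactly one block device with serial SPDKISFASTANDAWESOME shows up, then disconnect and tear the subsystem down again. One iteration, condensed from the traced commands, with scripts/rpc.py assumed in place of the rpc_cmd wrapper and without the retry loop the harness wraps around the lsblk check:

  for i in $(seq 1 5); do
      ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
      ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
          --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
      lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect exactly 1 namespace visible
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done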
00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.391 [2024-07-25 14:12:52.839903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.391 14:12:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:23.958 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:23.958 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:11:23.958 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:23.958 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:23.958 14:12:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:26.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.494 [2024-07-25 14:12:55.654604] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.494 14:12:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:26.753 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:26.753 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:26.753 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:26.753 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:26.753 14:12:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:29.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.295 14:12:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.295 [2024-07-25 14:12:58.478019] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.295 14:12:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.555 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.555 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:29.555 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.555 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:29.555 14:12:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.090 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.091 [2024-07-25 14:13:01.294319] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.091 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.348 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.348 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:32.348 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.348 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:32.348 14:13:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:34.884 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:34.884 14:13:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:34.884 14:13:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.884 14:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.884 [2024-07-25 14:13:04.110213] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.884 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 [2024-07-25 14:13:04.158240] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 [2024-07-25 14:13:04.206442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 [2024-07-25 14:13:04.254604] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.885 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 [2024-07-25 14:13:04.302768] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.886 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:34.886 "tick_rate": 2700000000, 00:11:34.886 "poll_groups": [ 00:11:34.886 { 00:11:34.886 "name": "nvmf_tgt_poll_group_000", 00:11:34.886 "admin_qpairs": 2, 00:11:34.886 "io_qpairs": 84, 00:11:34.886 "current_admin_qpairs": 0, 00:11:34.886 "current_io_qpairs": 0, 00:11:34.886 "pending_bdev_io": 0, 00:11:34.886 "completed_nvme_io": 145, 00:11:34.886 "transports": [ 00:11:34.886 { 00:11:34.886 "trtype": "TCP" 00:11:34.886 } 00:11:34.886 ] 00:11:34.886 }, 00:11:34.886 { 00:11:34.886 "name": "nvmf_tgt_poll_group_001", 00:11:34.886 "admin_qpairs": 2, 00:11:34.886 "io_qpairs": 84, 00:11:34.886 "current_admin_qpairs": 0, 00:11:34.886 "current_io_qpairs": 0, 00:11:34.886 "pending_bdev_io": 0, 00:11:34.886 "completed_nvme_io": 243, 00:11:34.886 "transports": [ 00:11:34.886 { 00:11:34.886 "trtype": "TCP" 00:11:34.886 } 00:11:34.886 ] 00:11:34.886 }, 00:11:34.886 { 00:11:34.886 "name": "nvmf_tgt_poll_group_002", 00:11:34.886 "admin_qpairs": 1, 00:11:34.886 "io_qpairs": 84, 00:11:34.886 "current_admin_qpairs": 0, 00:11:34.886 "current_io_qpairs": 0, 00:11:34.886 "pending_bdev_io": 0, 00:11:34.886 "completed_nvme_io": 103, 00:11:34.886 "transports": [ 00:11:34.886 { 00:11:34.886 "trtype": "TCP" 00:11:34.886 } 00:11:34.886 ] 00:11:34.886 }, 00:11:34.886 { 00:11:34.886 "name": "nvmf_tgt_poll_group_003", 00:11:34.886 "admin_qpairs": 2, 00:11:34.886 "io_qpairs": 84, 00:11:34.886 "current_admin_qpairs": 0, 00:11:34.886 "current_io_qpairs": 0, 00:11:34.886 "pending_bdev_io": 0, 00:11:34.886 "completed_nvme_io": 195, 00:11:34.886 "transports": [ 00:11:34.886 { 00:11:34.886 "trtype": "TCP" 00:11:34.886 } 00:11:34.886 ] 00:11:34.886 } 00:11:34.886 ] 00:11:34.886 }' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:34.887 rmmod nvme_tcp 00:11:34.887 rmmod nvme_fabrics 00:11:34.887 rmmod nvme_keyring 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 876412 ']' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 876412 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 876412 ']' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 876412 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 876412 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 876412' 00:11:34.887 killing process with pid 876412 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 876412 00:11:34.887 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 876412 00:11:35.146 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:35.146 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:35.146 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:35.146 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:35.146 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:35.146 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.146 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:35.146 14:13:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:37.683 00:11:37.683 real 0m25.409s 00:11:37.683 user 1m22.682s 00:11:37.683 sys 0m4.116s 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:37.683 ************************************ 00:11:37.683 END TEST nvmf_rpc 00:11:37.683 ************************************ 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:37.683 ************************************ 00:11:37.683 START TEST nvmf_invalid 00:11:37.683 ************************************ 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:37.683 * Looking for test storage... 
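The nvmf_rpc run that finishes above loops the same subsystem lifecycle before totalling the poll-group counters; a condensed sketch of one iteration (target/rpc.sh lines 81-94 in the trace) and of the jsum helper behind the 7/336 totals, with workspace paths shortened and the hostnqn arguments elided:

  # one iteration of the create/connect/teardown loop traced above
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # waitforserial polls this until a device appears
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # jsum: sum one numeric field across all poll groups in nvmf_get_stats output
  rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # 7 in this run
  rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'   # 336 in this run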
00:11:37.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 
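For orientation, the rpc and nqn variables set just above are what every negative check further down combines; roughly, with the workspace path to rpc.py shortened and the invocations lifted from the trace that follows:

  $rpc nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4039                       # unknown target name, expect error -32603
  $rpc nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11285  # control char in serial number, expect "Invalid SN"
  $rpc nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24214       # control char in model number, expect "Invalid MN"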
00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:37.683 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:37.684 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.684 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.684 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:37.684 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:37.684 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:37.684 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:37.684 14:13:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:39.582 14:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:39.582 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:39.582 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:39.583 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
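The pci_devs/e810 bookkeeping being traced here is nvmf/common.sh classifying which NICs the test may use; a simplified sysfs-based sketch of the idea (a reconstruction, not the verbatim helper), which on this host ends up with the two cvl_0_* interfaces:

  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      [ "$(cat "$pci/vendor")" = 0x8086 ] || continue                          # Intel only in this sketch
      case "$(cat "$pci/device")" in 0x159b|0x1592) ;; *) continue ;; esac     # E810 device ids matched above
      for dev in "$pci"/net/*; do                                              # kernel netdev(s) behind the PCI function
          [ -e "$dev" ] && net_devs+=("${dev##*/}")
      done
  done
  echo "Found net devices: ${net_devs[*]}"    # cvl_0_0 cvl_0_1 on this machine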
00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:39.583 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:39.583 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:39.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:39.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:11:39.583 00:11:39.583 --- 10.0.0.2 ping statistics --- 00:11:39.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.583 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:39.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:39.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:11:39.583 00:11:39.583 --- 10.0.0.1 ping statistics --- 00:11:39.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:39.583 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=880907 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 880907 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 880907 ']' 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:39.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:39.583 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:39.843 [2024-07-25 14:13:09.249299] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
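What the trace just walked through is the standard target bring-up: nvmf_tcp_init moves one port into a private namespace and addresses both ends, then nvmfappstart launches nvmf_tgt inside that namespace and waitforlisten polls until its RPC socket answers. Condensed, with workspace paths shortened and the polling loop simplified:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 1; done    # stand-in for waitforlisten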
00:11:39.843 [2024-07-25 14:13:09.249368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:39.843 EAL: No free 2048 kB hugepages reported on node 1 00:11:39.843 [2024-07-25 14:13:09.313701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:39.843 [2024-07-25 14:13:09.419630] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:39.843 [2024-07-25 14:13:09.419682] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:39.843 [2024-07-25 14:13:09.419696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:39.843 [2024-07-25 14:13:09.419707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:39.843 [2024-07-25 14:13:09.419716] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:39.843 [2024-07-25 14:13:09.419800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:39.843 [2024-07-25 14:13:09.419908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:39.843 [2024-07-25 14:13:09.419996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:39.843 [2024-07-25 14:13:09.419998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.101 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:40.101 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:11:40.101 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:40.101 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:40.101 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:40.101 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.101 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:40.101 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode4039 00:11:40.358 [2024-07-25 14:13:09.863463] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:40.358 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:40.358 { 00:11:40.358 "nqn": "nqn.2016-06.io.spdk:cnode4039", 00:11:40.358 "tgt_name": "foobar", 00:11:40.358 "method": "nvmf_create_subsystem", 00:11:40.358 "req_id": 1 00:11:40.358 } 00:11:40.358 Got JSON-RPC error response 00:11:40.358 response: 00:11:40.358 { 00:11:40.358 "code": -32603, 00:11:40.358 "message": "Unable to find target foobar" 00:11:40.358 }' 00:11:40.358 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:40.358 { 00:11:40.358 "nqn": "nqn.2016-06.io.spdk:cnode4039", 00:11:40.358 "tgt_name": "foobar", 00:11:40.358 "method": "nvmf_create_subsystem", 00:11:40.358 "req_id": 1 00:11:40.358 
} 00:11:40.358 Got JSON-RPC error response 00:11:40.358 response: 00:11:40.358 { 00:11:40.358 "code": -32603, 00:11:40.358 "message": "Unable to find target foobar" 00:11:40.358 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:40.358 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:40.358 14:13:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11285 00:11:40.615 [2024-07-25 14:13:10.124395] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11285: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:40.615 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:40.615 { 00:11:40.615 "nqn": "nqn.2016-06.io.spdk:cnode11285", 00:11:40.615 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:40.615 "method": "nvmf_create_subsystem", 00:11:40.615 "req_id": 1 00:11:40.616 } 00:11:40.616 Got JSON-RPC error response 00:11:40.616 response: 00:11:40.616 { 00:11:40.616 "code": -32602, 00:11:40.616 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:40.616 }' 00:11:40.616 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:40.616 { 00:11:40.616 "nqn": "nqn.2016-06.io.spdk:cnode11285", 00:11:40.616 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:40.616 "method": "nvmf_create_subsystem", 00:11:40.616 "req_id": 1 00:11:40.616 } 00:11:40.616 Got JSON-RPC error response 00:11:40.616 response: 00:11:40.616 { 00:11:40.616 "code": -32602, 00:11:40.616 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:40.616 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:40.616 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:40.616 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24214 00:11:40.875 [2024-07-25 14:13:10.377217] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24214: invalid model number 'SPDK_Controller' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:40.875 { 00:11:40.875 "nqn": "nqn.2016-06.io.spdk:cnode24214", 00:11:40.875 "model_number": "SPDK_Controller\u001f", 00:11:40.875 "method": "nvmf_create_subsystem", 00:11:40.875 "req_id": 1 00:11:40.875 } 00:11:40.875 Got JSON-RPC error response 00:11:40.875 response: 00:11:40.875 { 00:11:40.875 "code": -32602, 00:11:40.875 "message": "Invalid MN SPDK_Controller\u001f" 00:11:40.875 }' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:40.875 { 00:11:40.875 "nqn": "nqn.2016-06.io.spdk:cnode24214", 00:11:40.875 "model_number": "SPDK_Controller\u001f", 00:11:40.875 "method": "nvmf_create_subsystem", 00:11:40.875 "req_id": 1 00:11:40.875 } 00:11:40.875 Got JSON-RPC error response 00:11:40.875 response: 00:11:40.875 { 00:11:40.875 "code": -32602, 00:11:40.875 "message": "Invalid MN SPDK_Controller\u001f" 00:11:40.875 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
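The negative tests traced above all follow one pattern: call rpc.py with a deliberately bad value, capture the JSON-RPC error text it prints, and glob-match the expected message. A minimal sketch of that pattern using the serial-number case from the trace (the capture-and-match wrapper below is an assumption for illustration, not the literal target/invalid.sh lines; the rpc.py path, flags, subsystem NQN and error text are taken from the log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # 0x1f (octal 037) is appended on purpose so the serial number is rejected.
  out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
        nqn.2016-06.io.spdk:cnode11285 2>&1) || true
  # rpc.py echoes the failed request plus the JSON-RPC error (code -32602 here).
  [[ $out == *"Invalid SN"* ]] && echo "subsystem creation rejected as expected"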
00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='?' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.875 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
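The long stretch of xtrace above and below is gen_random_s assembling a random serial number one character at a time: pick an ASCII code from the chars array, format it as hex with printf %x, turn the hex escape back into a character with echo -e, and append it to string. A standalone sketch of the same idea (hypothetical helper name and a printable-only character range; not the actual target/invalid.sh implementation):

  gen_random_string_sketch() {
      local length=$1 ll code string=
      local -a chars
      chars=( {32..126} )                               # printable ASCII codes
      for (( ll = 0; ll < length; ll++ )); do
          code=${chars[RANDOM % ${#chars[@]}]}
          string+=$(echo -e "\\x$(printf %x "$code")")  # hex escape -> character
      done
      echo "$string"
  }

The requested lengths of 21 and 41 characters are presumably chosen to be one byte longer than the 20-byte serial-number and 40-byte model-number fields, so nvmf_create_subsystem has to reject the generated values.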
00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ d == \- ]] 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'd7kkEFVz1D>?/]+.ByBY`' 00:11:40.876 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'd7kkEFVz1D>?/]+.ByBY`' nqn.2016-06.io.spdk:cnode18790 00:11:41.157 [2024-07-25 14:13:10.702321] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18790: invalid serial number 'd7kkEFVz1D>?/]+.ByBY`' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 
-- # out='request: 00:11:41.157 { 00:11:41.157 "nqn": "nqn.2016-06.io.spdk:cnode18790", 00:11:41.157 "serial_number": "d7kkEFVz1D>?/]+.ByBY`", 00:11:41.157 "method": "nvmf_create_subsystem", 00:11:41.157 "req_id": 1 00:11:41.157 } 00:11:41.157 Got JSON-RPC error response 00:11:41.157 response: 00:11:41.157 { 00:11:41.157 "code": -32602, 00:11:41.157 "message": "Invalid SN d7kkEFVz1D>?/]+.ByBY`" 00:11:41.157 }' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:41.157 { 00:11:41.157 "nqn": "nqn.2016-06.io.spdk:cnode18790", 00:11:41.157 "serial_number": "d7kkEFVz1D>?/]+.ByBY`", 00:11:41.157 "method": "nvmf_create_subsystem", 00:11:41.157 "req_id": 1 00:11:41.157 } 00:11:41.157 Got JSON-RPC error response 00:11:41.157 response: 00:11:41.157 { 00:11:41.157 "code": -32602, 00:11:41.157 "message": "Invalid SN d7kkEFVz1D>?/]+.ByBY`" 00:11:41.157 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 
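Putting the two sketches together gives the shape of the test that just completed above: generate an over-long serial number, pass it to nvmf_create_subsystem, and expect the "Invalid SN" error. Again a sketch: cnode18790 and the -s flag are taken from the trace, while $rpc and gen_random_string_sketch are the hypothetical names introduced earlier.

  serial=$(gen_random_string_sketch 21)   # one character longer than the SN field allows
  out=$("$rpc" nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode18790 2>&1) || true
  [[ $out == *"Invalid SN"* ]]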
00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.157 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 
00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.158 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 
00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.422 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
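One of the characters appended just above is a literal single quote (ASCII 39), which is why the xtrace shows string+=\' and why the generated model number is later rendered with bash's close-escape-reopen quoting. The same idiom in isolation (sketch):

  s='model with an embedded '\'' quote'
  echo "$s"        # -> model with an embedded ' quote

So when the full 41-character value appears below as ...M^'\''5!^..., the '\'' run stands for a single quote character inside the value, not for extra characters.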
00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 
00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"5>dKu?A]WbGDH6+CMY&8h27M^'\''5!^{EZ>jTB3=/R' 00:11:41.423 14:13:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '"5>dKu?A]WbGDH6+CMY&8h27M^'\''5!^{EZ>jTB3=/R' nqn.2016-06.io.spdk:cnode24381 00:11:41.681 [2024-07-25 14:13:11.091552] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24381: invalid model number '"5>dKu?A]WbGDH6+CMY&8h27M^'5!^{EZ>jTB3=/R' 00:11:41.681 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:41.681 { 00:11:41.681 "nqn": "nqn.2016-06.io.spdk:cnode24381", 00:11:41.681 "model_number": "\"5>dKu?A]WbGDH6+CMY&8h27M^'\''5!^{EZ>jTB3=/R", 00:11:41.681 "method": "nvmf_create_subsystem", 00:11:41.681 "req_id": 1 00:11:41.681 } 00:11:41.681 Got JSON-RPC error response 00:11:41.681 response: 00:11:41.681 { 00:11:41.681 "code": -32602, 00:11:41.681 "message": "Invalid MN \"5>dKu?A]WbGDH6+CMY&8h27M^'\''5!^{EZ>jTB3=/R" 00:11:41.681 }' 00:11:41.681 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:41.681 { 00:11:41.681 "nqn": "nqn.2016-06.io.spdk:cnode24381", 00:11:41.681 "model_number": "\"5>dKu?A]WbGDH6+CMY&8h27M^'5!^{EZ>jTB3=/R", 00:11:41.681 "method": "nvmf_create_subsystem", 00:11:41.681 "req_id": 1 00:11:41.681 } 00:11:41.681 Got JSON-RPC error response 00:11:41.681 response: 00:11:41.681 { 00:11:41.681 "code": -32602, 00:11:41.681 "message": "Invalid MN \"5>dKu?A]WbGDH6+CMY&8h27M^'5!^{EZ>jTB3=/R" 00:11:41.681 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:41.681 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:41.939 [2024-07-25 14:13:11.344478] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.939 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:42.196 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:42.196 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:42.196 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:42.196 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:42.196 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:42.196 [2024-07-25 14:13:11.846113] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:42.455 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:42.455 { 00:11:42.455 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:42.455 "listen_address": { 00:11:42.455 "trtype": "tcp", 
00:11:42.455 "traddr": "", 00:11:42.455 "trsvcid": "4421" 00:11:42.455 }, 00:11:42.455 "method": "nvmf_subsystem_remove_listener", 00:11:42.455 "req_id": 1 00:11:42.455 } 00:11:42.455 Got JSON-RPC error response 00:11:42.455 response: 00:11:42.455 { 00:11:42.455 "code": -32602, 00:11:42.455 "message": "Invalid parameters" 00:11:42.455 }' 00:11:42.455 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:42.455 { 00:11:42.455 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:42.455 "listen_address": { 00:11:42.455 "trtype": "tcp", 00:11:42.455 "traddr": "", 00:11:42.455 "trsvcid": "4421" 00:11:42.455 }, 00:11:42.455 "method": "nvmf_subsystem_remove_listener", 00:11:42.455 "req_id": 1 00:11:42.455 } 00:11:42.455 Got JSON-RPC error response 00:11:42.455 response: 00:11:42.455 { 00:11:42.455 "code": -32602, 00:11:42.455 "message": "Invalid parameters" 00:11:42.455 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:42.455 14:13:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11573 -i 0 00:11:42.455 [2024-07-25 14:13:12.090863] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11573: invalid cntlid range [0-65519] 00:11:42.714 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:42.714 { 00:11:42.714 "nqn": "nqn.2016-06.io.spdk:cnode11573", 00:11:42.714 "min_cntlid": 0, 00:11:42.714 "method": "nvmf_create_subsystem", 00:11:42.714 "req_id": 1 00:11:42.714 } 00:11:42.714 Got JSON-RPC error response 00:11:42.714 response: 00:11:42.714 { 00:11:42.714 "code": -32602, 00:11:42.714 "message": "Invalid cntlid range [0-65519]" 00:11:42.714 }' 00:11:42.714 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:42.714 { 00:11:42.714 "nqn": "nqn.2016-06.io.spdk:cnode11573", 00:11:42.714 "min_cntlid": 0, 00:11:42.714 "method": "nvmf_create_subsystem", 00:11:42.714 "req_id": 1 00:11:42.714 } 00:11:42.714 Got JSON-RPC error response 00:11:42.714 response: 00:11:42.714 { 00:11:42.714 "code": -32602, 00:11:42.714 "message": "Invalid cntlid range [0-65519]" 00:11:42.714 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:42.714 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16168 -i 65520 00:11:42.714 [2024-07-25 14:13:12.335670] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16168: invalid cntlid range [65520-65519] 00:11:42.714 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:42.714 { 00:11:42.714 "nqn": "nqn.2016-06.io.spdk:cnode16168", 00:11:42.714 "min_cntlid": 65520, 00:11:42.714 "method": "nvmf_create_subsystem", 00:11:42.714 "req_id": 1 00:11:42.714 } 00:11:42.714 Got JSON-RPC error response 00:11:42.714 response: 00:11:42.714 { 00:11:42.714 "code": -32602, 00:11:42.714 "message": "Invalid cntlid range [65520-65519]" 00:11:42.714 }' 00:11:42.714 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:42.714 { 00:11:42.714 "nqn": "nqn.2016-06.io.spdk:cnode16168", 00:11:42.714 "min_cntlid": 65520, 00:11:42.714 "method": "nvmf_create_subsystem", 00:11:42.714 "req_id": 1 00:11:42.714 } 00:11:42.714 Got JSON-RPC error response 
00:11:42.714 response: 00:11:42.714 { 00:11:42.714 "code": -32602, 00:11:42.714 "message": "Invalid cntlid range [65520-65519]" 00:11:42.714 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:42.714 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4164 -I 0 00:11:42.972 [2024-07-25 14:13:12.588560] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4164: invalid cntlid range [1-0] 00:11:42.972 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:42.972 { 00:11:42.972 "nqn": "nqn.2016-06.io.spdk:cnode4164", 00:11:42.972 "max_cntlid": 0, 00:11:42.972 "method": "nvmf_create_subsystem", 00:11:42.972 "req_id": 1 00:11:42.972 } 00:11:42.972 Got JSON-RPC error response 00:11:42.972 response: 00:11:42.972 { 00:11:42.972 "code": -32602, 00:11:42.972 "message": "Invalid cntlid range [1-0]" 00:11:42.972 }' 00:11:42.972 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:42.972 { 00:11:42.972 "nqn": "nqn.2016-06.io.spdk:cnode4164", 00:11:42.972 "max_cntlid": 0, 00:11:42.972 "method": "nvmf_create_subsystem", 00:11:42.972 "req_id": 1 00:11:42.972 } 00:11:42.972 Got JSON-RPC error response 00:11:42.972 response: 00:11:42.972 { 00:11:42.972 "code": -32602, 00:11:42.972 "message": "Invalid cntlid range [1-0]" 00:11:42.972 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:42.972 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode240 -I 65520 00:11:43.229 [2024-07-25 14:13:12.849392] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode240: invalid cntlid range [1-65520] 00:11:43.229 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:43.229 { 00:11:43.229 "nqn": "nqn.2016-06.io.spdk:cnode240", 00:11:43.229 "max_cntlid": 65520, 00:11:43.229 "method": "nvmf_create_subsystem", 00:11:43.229 "req_id": 1 00:11:43.229 } 00:11:43.229 Got JSON-RPC error response 00:11:43.229 response: 00:11:43.229 { 00:11:43.229 "code": -32602, 00:11:43.229 "message": "Invalid cntlid range [1-65520]" 00:11:43.229 }' 00:11:43.230 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:43.230 { 00:11:43.230 "nqn": "nqn.2016-06.io.spdk:cnode240", 00:11:43.230 "max_cntlid": 65520, 00:11:43.230 "method": "nvmf_create_subsystem", 00:11:43.230 "req_id": 1 00:11:43.230 } 00:11:43.230 Got JSON-RPC error response 00:11:43.230 response: 00:11:43.230 { 00:11:43.230 "code": -32602, 00:11:43.230 "message": "Invalid cntlid range [1-65520]" 00:11:43.230 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.230 14:13:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21685 -i 6 -I 5 00:11:43.487 [2024-07-25 14:13:13.094221] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21685: invalid cntlid range [6-5] 00:11:43.487 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:43.487 { 00:11:43.487 "nqn": "nqn.2016-06.io.spdk:cnode21685", 00:11:43.487 "min_cntlid": 6, 00:11:43.487 "max_cntlid": 5, 00:11:43.487 
"method": "nvmf_create_subsystem", 00:11:43.487 "req_id": 1 00:11:43.487 } 00:11:43.487 Got JSON-RPC error response 00:11:43.487 response: 00:11:43.487 { 00:11:43.487 "code": -32602, 00:11:43.487 "message": "Invalid cntlid range [6-5]" 00:11:43.487 }' 00:11:43.488 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:43.488 { 00:11:43.488 "nqn": "nqn.2016-06.io.spdk:cnode21685", 00:11:43.488 "min_cntlid": 6, 00:11:43.488 "max_cntlid": 5, 00:11:43.488 "method": "nvmf_create_subsystem", 00:11:43.488 "req_id": 1 00:11:43.488 } 00:11:43.488 Got JSON-RPC error response 00:11:43.488 response: 00:11:43.488 { 00:11:43.488 "code": -32602, 00:11:43.488 "message": "Invalid cntlid range [6-5]" 00:11:43.488 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:43.488 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:43.747 { 00:11:43.747 "name": "foobar", 00:11:43.747 "method": "nvmf_delete_target", 00:11:43.747 "req_id": 1 00:11:43.747 } 00:11:43.747 Got JSON-RPC error response 00:11:43.747 response: 00:11:43.747 { 00:11:43.747 "code": -32602, 00:11:43.747 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:43.747 }' 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:43.747 { 00:11:43.747 "name": "foobar", 00:11:43.747 "method": "nvmf_delete_target", 00:11:43.747 "req_id": 1 00:11:43.747 } 00:11:43.747 Got JSON-RPC error response 00:11:43.747 response: 00:11:43.747 { 00:11:43.747 "code": -32602, 00:11:43.747 "message": "The specified target doesn't exist, cannot delete it." 
00:11:43.747 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:43.747 rmmod nvme_tcp 00:11:43.747 rmmod nvme_fabrics 00:11:43.747 rmmod nvme_keyring 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 880907 ']' 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 880907 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 880907 ']' 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 880907 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 880907 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 880907' 00:11:43.747 killing process with pid 880907 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 880907 00:11:43.747 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 880907 00:11:44.005 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:44.005 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:44.005 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:44.005 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:44.005 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:44.005 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.005 14:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.005 14:13:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:46.542 00:11:46.542 real 0m8.752s 00:11:46.542 user 0m20.236s 00:11:46.542 sys 0m2.441s 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:46.542 ************************************ 00:11:46.542 END TEST nvmf_invalid 00:11:46.542 ************************************ 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.542 ************************************ 00:11:46.542 START TEST nvmf_connect_stress 00:11:46.542 ************************************ 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:46.542 * Looking for test storage... 
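The real/user/sys lines and the asterisk banners above come from the run_test wrapper (traced here out of common/autotest_common.sh), which times each sub-test and brackets its output; nvmf_connect_stress is launched through the same wrapper. A simplified sketch of that mechanism (assumed form for illustration, not the actual implementation):

  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"        # prints the real/user/sys summary when the test script exits
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }
  # e.g. run_test_sketch nvmf_connect_stress \
  #      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp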
00:11:46.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.542 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:46.543 14:13:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.447 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.447 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:48.447 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:48.447 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:48.448 14:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:48.448 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:48.448 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
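The device bookkeeping traced here builds per-family ID lists (e810, x722, mlx) from a pre-populated pci_bus_cache map and then walks each matching PCI function's net/ directory in sysfs to find its netdev. A condensed, sysfs-only sketch of the same idea (device IDs copied from this trace; the harness itself keys off pci_bus_cache rather than scanning sysfs directly):

  intel=0x8086
  e810_ids="0x1592 0x159b"                 # Intel E810 device IDs recognized in this trace
  for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/vendor") == "$intel" ]] || continue
      device=$(cat "$dev/device")
      for id in $e810_ids; do
          if [[ $device == "$id" ]]; then
              echo "Found ${dev##*/} ($intel - $device)"
              for net in "$dev"/net/*; do          # e.g. cvl_0_0, cvl_0_1 on this node
                  [[ -e $net ]] && echo "  net device: ${net##*/}"
              done
          fi
      done
  done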
00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:48.448 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:48.448 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:48.448 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:48.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:48.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:11:48.449 00:11:48.449 --- 10.0.0.2 ping statistics --- 00:11:48.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.449 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:48.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:48.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:11:48.449 00:11:48.449 --- 10.0.0.1 ping statistics --- 00:11:48.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.449 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=883535 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 883535 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 883535 ']' 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.449 14:13:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.449 [2024-07-25 14:13:17.980334] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
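Stripped of the xtrace prefixes, the nvmf_tcp_init sequence above amounts to the following: the target-side port (cvl_0_0 here) is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, port 4420 is opened, reachability is checked with ping in both directions, and nvmf_tgt is started inside the namespace. Interface names, addresses and the core mask are the ones used on this node; the nvmf_tgt path is abbreviated:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> root ns
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE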
00:11:48.449 [2024-07-25 14:13:17.980421] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.449 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.449 [2024-07-25 14:13:18.041395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:48.708 [2024-07-25 14:13:18.152708] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.708 [2024-07-25 14:13:18.152758] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.708 [2024-07-25 14:13:18.152771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.708 [2024-07-25 14:13:18.152783] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.708 [2024-07-25 14:13:18.152793] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:48.708 [2024-07-25 14:13:18.152877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.708 [2024-07-25 14:13:18.152941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.708 [2024-07-25 14:13:18.152944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.708 [2024-07-25 14:13:18.290132] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.708 [2024-07-25 14:13:18.326276] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:48.708 NULL1 00:11:48.708 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=883625 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.709 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:48.967 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.967 14:13:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.227 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.227 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:49.227 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.227 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.227 14:13:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.485 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.486 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:49.486 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.486 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.486 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:49.743 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:49.743 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:49.743 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:49.743 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:49.743 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.309 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.309 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:50.309 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.309 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.309 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.568 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.568 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:50.568 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.568 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.568 14:13:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:50.828 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:50.828 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:50.828 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:50.828 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:50.828 14:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.088 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.088 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:51.088 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.088 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.088 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.348 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.348 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:51.349 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.349 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.349 14:13:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:51.916 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:51.916 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:51.916 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:51.916 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:51.916 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.176 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.176 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:52.176 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.176 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.176 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.436 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.436 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:52.436 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.436 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.436 14:13:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.696 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.696 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:52.696 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.696 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.696 14:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.954 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:52.954 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:52.954 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:52.955 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:52.955 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.523 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.523 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:53.523 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.523 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.523 14:13:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.783 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.783 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:53.783 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.783 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.783 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.043 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.043 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:54.043 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.043 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.043 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.301 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.301 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:54.301 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.301 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.301 14:13:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.559 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.559 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:54.559 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.559 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.559 14:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.128 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.129 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:55.129 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.129 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.129 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.387 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.387 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:55.387 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.387 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.387 14:13:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.647 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.647 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:55.647 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.647 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.647 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.905 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.905 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:55.905 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.905 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.905 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.164 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.164 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:56.164 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.164 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.164 14:13:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.735 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.735 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:56.735 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.735 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.735 14:13:26 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.994 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.994 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:56.994 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.994 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.994 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.253 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.253 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:57.253 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.253 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.253 14:13:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.512 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.512 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:57.512 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.512 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.512 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.770 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.770 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:57.770 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.770 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.770 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.341 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.341 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:58.341 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.341 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.341 14:13:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.599 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.599 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:58.599 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.599 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.599 14:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.876 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.876 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:58.876 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.876 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.876 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.153 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 883625 00:11:59.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (883625) - No such process 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 883625 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:59.153 rmmod nvme_tcp 00:11:59.153 rmmod nvme_fabrics 00:11:59.153 rmmod nvme_keyring 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 883535 ']' 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 883535 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 883535 ']' 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 883535 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:59.153 14:13:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 883535 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 883535' 00:11:59.153 killing process with pid 883535 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 883535 00:11:59.153 14:13:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 883535 00:11:59.412 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:59.412 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:59.412 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:59.412 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:59.412 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:59.412 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.412 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.412 14:13:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.951 00:12:01.951 real 0m15.404s 00:12:01.951 user 0m38.384s 00:12:01.951 sys 0m5.983s 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.951 ************************************ 00:12:01.951 END TEST nvmf_connect_stress 00:12:01.951 ************************************ 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.951 ************************************ 00:12:01.951 START TEST nvmf_fused_ordering 00:12:01.951 ************************************ 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:01.951 * Looking for test storage... 
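Before the fused-ordering test repeats the same common.sh setup, note the shape of the connect_stress run that just finished: the target is configured over rpc_cmd, the connect_stress initiator is run against the listener for 10 seconds, and RPCs keep being replayed for as long as the initiator process is alive. Reduced to the commands visible in the trace (rpc.txt's contents are never shown there, so the loop body below is indicative only; paths abbreviated):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512              # backing null bdev for the subsystem

  ./test/nvme/connect_stress/connect_stress -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  while kill -0 "$PERF_PID"; do      # loop until the initiator exits (about 10 s)
      rpc_cmd < rpc.txt              # rpc.txt is assembled from 20 heredocs not visible in this trace
  done
  wait "$PERF_PID"
  rm -f rpc.txt

The final failed kill -0 ("No such process") is expected: it is simply the loop condition observing that the initiator has exited, after which the script cleans up and nvmftestfini tears the target down, as seen above.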
00:12:01.951 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.951 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.952 14:13:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:03.865 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:03.866 14:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:03.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:03.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:03.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:03.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:03.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:12:03.866 00:12:03.866 --- 10.0.0.2 ping statistics --- 00:12:03.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.866 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:12:03.866 00:12:03.866 --- 10.0.0.1 ping statistics --- 00:12:03.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.866 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:03.866 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=886822 00:12:03.867 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:03.867 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 886822 00:12:03.867 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 886822 ']' 00:12:03.867 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.867 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.867 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.867 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.867 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:03.867 [2024-07-25 14:13:33.424248] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
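The xtrace above shows nvmf_tcp_init moving the target-side port into a dedicated network namespace and ping-testing both directions before nvmf_tgt is started. As an illustrative sketch only (the commands are copied from the trace above; the interface names cvl_0_0/cvl_0_1, the namespace name cvl_0_0_ns_spdk, and the 10.0.0.0/24 addressing are specific to this test bed), the same bring-up looks roughly like:

    # target port goes into its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # initiator keeps 10.0.0.1 on cvl_0_1, target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # let the NVMe/TCP listener port through on the initiator side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity-check both directions before starting nvmf_tgt
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Splitting the two ports of one physical NIC across namespaces like this is what lets target and initiator run on the same host while still exercising real TCP traffic on the wire.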
00:12:03.867 [2024-07-25 14:13:33.424329] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.867 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.867 [2024-07-25 14:13:33.485736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.129 [2024-07-25 14:13:33.585093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.129 [2024-07-25 14:13:33.585151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.129 [2024-07-25 14:13:33.585179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.129 [2024-07-25 14:13:33.585190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.129 [2024-07-25 14:13:33.585200] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.129 [2024-07-25 14:13:33.585233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 [2024-07-25 14:13:33.728454] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:12:04.129 [2024-07-25 14:13:33.744639] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 NULL1 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.129 14:13:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:04.389 [2024-07-25 14:13:33.788637] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
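With nvmf_tgt listening inside the namespace, the trace configures it over JSON-RPC and then drives it with the fused_ordering initiator tool. A minimal sketch of that sequence follows, assuming rpc_cmd resolves to the scripts/rpc.py wrapper talking to the target's default /var/tmp/spdk.sock socket (the RPC names, arguments, and the connection string are copied from the trace; the wrapper invocation itself is an assumption):

    # rpc.py path and socket are assumptions; the RPC calls match the xtrace above
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # null bdev backing store, sized/blocked as in the trace (reported as a 1GB namespace)
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_wait_for_examine
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    # run the fused-ordering exerciser against the new listener
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines that follow are the tool's per-operation progress output as it submits its sequence of fused command pairs over the connection described by the -r string.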
00:12:04.389 [2024-07-25 14:13:33.788672] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid886842 ] 00:12:04.389 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.648 Attached to nqn.2016-06.io.spdk:cnode1 00:12:04.648 Namespace ID: 1 size: 1GB 00:12:04.648 fused_ordering(0) 00:12:04.648 fused_ordering(1) 00:12:04.648 fused_ordering(2) 00:12:04.648 fused_ordering(3) 00:12:04.648 fused_ordering(4) 00:12:04.648 fused_ordering(5) 00:12:04.648 fused_ordering(6) 00:12:04.648 fused_ordering(7) 00:12:04.648 fused_ordering(8) 00:12:04.648 fused_ordering(9) 00:12:04.648 fused_ordering(10) 00:12:04.648 fused_ordering(11) 00:12:04.648 fused_ordering(12) 00:12:04.648 fused_ordering(13) 00:12:04.648 fused_ordering(14) 00:12:04.648 fused_ordering(15) 00:12:04.648 fused_ordering(16) 00:12:04.648 fused_ordering(17) 00:12:04.648 fused_ordering(18) 00:12:04.648 fused_ordering(19) 00:12:04.648 fused_ordering(20) 00:12:04.648 fused_ordering(21) 00:12:04.648 fused_ordering(22) 00:12:04.648 fused_ordering(23) 00:12:04.648 fused_ordering(24) 00:12:04.648 fused_ordering(25) 00:12:04.648 fused_ordering(26) 00:12:04.648 fused_ordering(27) 00:12:04.649 fused_ordering(28) 00:12:04.649 fused_ordering(29) 00:12:04.649 fused_ordering(30) 00:12:04.649 fused_ordering(31) 00:12:04.649 fused_ordering(32) 00:12:04.649 fused_ordering(33) 00:12:04.649 fused_ordering(34) 00:12:04.649 fused_ordering(35) 00:12:04.649 fused_ordering(36) 00:12:04.649 fused_ordering(37) 00:12:04.649 fused_ordering(38) 00:12:04.649 fused_ordering(39) 00:12:04.649 fused_ordering(40) 00:12:04.649 fused_ordering(41) 00:12:04.649 fused_ordering(42) 00:12:04.649 fused_ordering(43) 00:12:04.649 fused_ordering(44) 00:12:04.649 fused_ordering(45) 00:12:04.649 fused_ordering(46) 00:12:04.649 fused_ordering(47) 00:12:04.649 fused_ordering(48) 00:12:04.649 fused_ordering(49) 00:12:04.649 fused_ordering(50) 00:12:04.649 fused_ordering(51) 00:12:04.649 fused_ordering(52) 00:12:04.649 fused_ordering(53) 00:12:04.649 fused_ordering(54) 00:12:04.649 fused_ordering(55) 00:12:04.649 fused_ordering(56) 00:12:04.649 fused_ordering(57) 00:12:04.649 fused_ordering(58) 00:12:04.649 fused_ordering(59) 00:12:04.649 fused_ordering(60) 00:12:04.649 fused_ordering(61) 00:12:04.649 fused_ordering(62) 00:12:04.649 fused_ordering(63) 00:12:04.649 fused_ordering(64) 00:12:04.649 fused_ordering(65) 00:12:04.649 fused_ordering(66) 00:12:04.649 fused_ordering(67) 00:12:04.649 fused_ordering(68) 00:12:04.649 fused_ordering(69) 00:12:04.649 fused_ordering(70) 00:12:04.649 fused_ordering(71) 00:12:04.649 fused_ordering(72) 00:12:04.649 fused_ordering(73) 00:12:04.649 fused_ordering(74) 00:12:04.649 fused_ordering(75) 00:12:04.649 fused_ordering(76) 00:12:04.649 fused_ordering(77) 00:12:04.649 fused_ordering(78) 00:12:04.649 fused_ordering(79) 00:12:04.649 fused_ordering(80) 00:12:04.649 fused_ordering(81) 00:12:04.649 fused_ordering(82) 00:12:04.649 fused_ordering(83) 00:12:04.649 fused_ordering(84) 00:12:04.649 fused_ordering(85) 00:12:04.649 fused_ordering(86) 00:12:04.649 fused_ordering(87) 00:12:04.649 fused_ordering(88) 00:12:04.649 fused_ordering(89) 00:12:04.649 fused_ordering(90) 00:12:04.649 fused_ordering(91) 00:12:04.649 fused_ordering(92) 00:12:04.649 fused_ordering(93) 00:12:04.649 fused_ordering(94) 00:12:04.649 fused_ordering(95) 00:12:04.649 fused_ordering(96) 
00:12:04.649 fused_ordering(97) 00:12:04.649 fused_ordering(98) 00:12:04.649 fused_ordering(99) 00:12:04.649 fused_ordering(100) 00:12:04.649 fused_ordering(101) 00:12:04.649 fused_ordering(102) 00:12:04.649 fused_ordering(103) 00:12:04.649 fused_ordering(104) 00:12:04.649 fused_ordering(105) 00:12:04.649 fused_ordering(106) 00:12:04.649 fused_ordering(107) 00:12:04.649 fused_ordering(108) 00:12:04.649 fused_ordering(109) 00:12:04.649 fused_ordering(110) 00:12:04.649 fused_ordering(111) 00:12:04.649 fused_ordering(112) 00:12:04.649 fused_ordering(113) 00:12:04.649 fused_ordering(114) 00:12:04.649 fused_ordering(115) 00:12:04.649 fused_ordering(116) 00:12:04.649 fused_ordering(117) 00:12:04.649 fused_ordering(118) 00:12:04.649 fused_ordering(119) 00:12:04.649 fused_ordering(120) 00:12:04.649 fused_ordering(121) 00:12:04.649 fused_ordering(122) 00:12:04.649 fused_ordering(123) 00:12:04.649 fused_ordering(124) 00:12:04.649 fused_ordering(125) 00:12:04.649 fused_ordering(126) 00:12:04.649 fused_ordering(127) 00:12:04.649 fused_ordering(128) 00:12:04.649 fused_ordering(129) 00:12:04.649 fused_ordering(130) 00:12:04.649 fused_ordering(131) 00:12:04.649 fused_ordering(132) 00:12:04.649 fused_ordering(133) 00:12:04.649 fused_ordering(134) 00:12:04.649 fused_ordering(135) 00:12:04.649 fused_ordering(136) 00:12:04.649 fused_ordering(137) 00:12:04.649 fused_ordering(138) 00:12:04.649 fused_ordering(139) 00:12:04.649 fused_ordering(140) 00:12:04.649 fused_ordering(141) 00:12:04.649 fused_ordering(142) 00:12:04.649 fused_ordering(143) 00:12:04.649 fused_ordering(144) 00:12:04.649 fused_ordering(145) 00:12:04.649 fused_ordering(146) 00:12:04.649 fused_ordering(147) 00:12:04.649 fused_ordering(148) 00:12:04.649 fused_ordering(149) 00:12:04.649 fused_ordering(150) 00:12:04.649 fused_ordering(151) 00:12:04.649 fused_ordering(152) 00:12:04.649 fused_ordering(153) 00:12:04.649 fused_ordering(154) 00:12:04.649 fused_ordering(155) 00:12:04.649 fused_ordering(156) 00:12:04.649 fused_ordering(157) 00:12:04.649 fused_ordering(158) 00:12:04.649 fused_ordering(159) 00:12:04.649 fused_ordering(160) 00:12:04.649 fused_ordering(161) 00:12:04.649 fused_ordering(162) 00:12:04.649 fused_ordering(163) 00:12:04.649 fused_ordering(164) 00:12:04.649 fused_ordering(165) 00:12:04.649 fused_ordering(166) 00:12:04.649 fused_ordering(167) 00:12:04.649 fused_ordering(168) 00:12:04.649 fused_ordering(169) 00:12:04.649 fused_ordering(170) 00:12:04.649 fused_ordering(171) 00:12:04.649 fused_ordering(172) 00:12:04.649 fused_ordering(173) 00:12:04.649 fused_ordering(174) 00:12:04.649 fused_ordering(175) 00:12:04.649 fused_ordering(176) 00:12:04.649 fused_ordering(177) 00:12:04.649 fused_ordering(178) 00:12:04.649 fused_ordering(179) 00:12:04.649 fused_ordering(180) 00:12:04.649 fused_ordering(181) 00:12:04.649 fused_ordering(182) 00:12:04.649 fused_ordering(183) 00:12:04.649 fused_ordering(184) 00:12:04.649 fused_ordering(185) 00:12:04.649 fused_ordering(186) 00:12:04.649 fused_ordering(187) 00:12:04.649 fused_ordering(188) 00:12:04.649 fused_ordering(189) 00:12:04.649 fused_ordering(190) 00:12:04.649 fused_ordering(191) 00:12:04.649 fused_ordering(192) 00:12:04.649 fused_ordering(193) 00:12:04.649 fused_ordering(194) 00:12:04.649 fused_ordering(195) 00:12:04.649 fused_ordering(196) 00:12:04.649 fused_ordering(197) 00:12:04.649 fused_ordering(198) 00:12:04.649 fused_ordering(199) 00:12:04.649 fused_ordering(200) 00:12:04.649 fused_ordering(201) 00:12:04.649 fused_ordering(202) 00:12:04.649 fused_ordering(203) 00:12:04.649 
fused_ordering(204) 00:12:04.649 fused_ordering(205) 00:12:05.217 fused_ordering(206) 00:12:05.217 fused_ordering(207) 00:12:05.217 fused_ordering(208) 00:12:05.217 fused_ordering(209) 00:12:05.217 fused_ordering(210) 00:12:05.217 fused_ordering(211) 00:12:05.217 fused_ordering(212) 00:12:05.217 fused_ordering(213) 00:12:05.217 fused_ordering(214) 00:12:05.217 fused_ordering(215) 00:12:05.217 fused_ordering(216) 00:12:05.217 fused_ordering(217) 00:12:05.217 fused_ordering(218) 00:12:05.217 fused_ordering(219) 00:12:05.217 fused_ordering(220) 00:12:05.217 fused_ordering(221) 00:12:05.217 fused_ordering(222) 00:12:05.217 fused_ordering(223) 00:12:05.217 fused_ordering(224) 00:12:05.217 fused_ordering(225) 00:12:05.217 fused_ordering(226) 00:12:05.217 fused_ordering(227) 00:12:05.217 fused_ordering(228) 00:12:05.217 fused_ordering(229) 00:12:05.217 fused_ordering(230) 00:12:05.217 fused_ordering(231) 00:12:05.217 fused_ordering(232) 00:12:05.217 fused_ordering(233) 00:12:05.217 fused_ordering(234) 00:12:05.217 fused_ordering(235) 00:12:05.217 fused_ordering(236) 00:12:05.217 fused_ordering(237) 00:12:05.217 fused_ordering(238) 00:12:05.217 fused_ordering(239) 00:12:05.217 fused_ordering(240) 00:12:05.217 fused_ordering(241) 00:12:05.217 fused_ordering(242) 00:12:05.217 fused_ordering(243) 00:12:05.217 fused_ordering(244) 00:12:05.217 fused_ordering(245) 00:12:05.217 fused_ordering(246) 00:12:05.217 fused_ordering(247) 00:12:05.217 fused_ordering(248) 00:12:05.217 fused_ordering(249) 00:12:05.217 fused_ordering(250) 00:12:05.217 fused_ordering(251) 00:12:05.217 fused_ordering(252) 00:12:05.217 fused_ordering(253) 00:12:05.217 fused_ordering(254) 00:12:05.217 fused_ordering(255) 00:12:05.217 fused_ordering(256) 00:12:05.217 fused_ordering(257) 00:12:05.217 fused_ordering(258) 00:12:05.217 fused_ordering(259) 00:12:05.217 fused_ordering(260) 00:12:05.217 fused_ordering(261) 00:12:05.217 fused_ordering(262) 00:12:05.217 fused_ordering(263) 00:12:05.217 fused_ordering(264) 00:12:05.217 fused_ordering(265) 00:12:05.217 fused_ordering(266) 00:12:05.217 fused_ordering(267) 00:12:05.217 fused_ordering(268) 00:12:05.217 fused_ordering(269) 00:12:05.217 fused_ordering(270) 00:12:05.217 fused_ordering(271) 00:12:05.217 fused_ordering(272) 00:12:05.217 fused_ordering(273) 00:12:05.217 fused_ordering(274) 00:12:05.217 fused_ordering(275) 00:12:05.217 fused_ordering(276) 00:12:05.217 fused_ordering(277) 00:12:05.217 fused_ordering(278) 00:12:05.217 fused_ordering(279) 00:12:05.217 fused_ordering(280) 00:12:05.217 fused_ordering(281) 00:12:05.217 fused_ordering(282) 00:12:05.217 fused_ordering(283) 00:12:05.217 fused_ordering(284) 00:12:05.217 fused_ordering(285) 00:12:05.217 fused_ordering(286) 00:12:05.217 fused_ordering(287) 00:12:05.217 fused_ordering(288) 00:12:05.217 fused_ordering(289) 00:12:05.217 fused_ordering(290) 00:12:05.217 fused_ordering(291) 00:12:05.217 fused_ordering(292) 00:12:05.217 fused_ordering(293) 00:12:05.217 fused_ordering(294) 00:12:05.217 fused_ordering(295) 00:12:05.217 fused_ordering(296) 00:12:05.217 fused_ordering(297) 00:12:05.217 fused_ordering(298) 00:12:05.217 fused_ordering(299) 00:12:05.217 fused_ordering(300) 00:12:05.217 fused_ordering(301) 00:12:05.217 fused_ordering(302) 00:12:05.217 fused_ordering(303) 00:12:05.217 fused_ordering(304) 00:12:05.217 fused_ordering(305) 00:12:05.217 fused_ordering(306) 00:12:05.217 fused_ordering(307) 00:12:05.217 fused_ordering(308) 00:12:05.217 fused_ordering(309) 00:12:05.217 fused_ordering(310) 00:12:05.217 fused_ordering(311) 
00:12:05.217 fused_ordering(312) 00:12:05.217 fused_ordering(313) 00:12:05.217 fused_ordering(314) 00:12:05.217 fused_ordering(315) 00:12:05.217 fused_ordering(316) 00:12:05.217 fused_ordering(317) 00:12:05.217 fused_ordering(318) 00:12:05.217 fused_ordering(319) 00:12:05.217 fused_ordering(320) 00:12:05.217 fused_ordering(321) 00:12:05.217 fused_ordering(322) 00:12:05.217 fused_ordering(323) 00:12:05.217 fused_ordering(324) 00:12:05.217 fused_ordering(325) 00:12:05.217 fused_ordering(326) 00:12:05.217 fused_ordering(327) 00:12:05.217 fused_ordering(328) 00:12:05.217 fused_ordering(329) 00:12:05.217 fused_ordering(330) 00:12:05.217 fused_ordering(331) 00:12:05.217 fused_ordering(332) 00:12:05.217 fused_ordering(333) 00:12:05.217 fused_ordering(334) 00:12:05.217 fused_ordering(335) 00:12:05.217 fused_ordering(336) 00:12:05.217 fused_ordering(337) 00:12:05.217 fused_ordering(338) 00:12:05.217 fused_ordering(339) 00:12:05.217 fused_ordering(340) 00:12:05.217 fused_ordering(341) 00:12:05.217 fused_ordering(342) 00:12:05.217 fused_ordering(343) 00:12:05.217 fused_ordering(344) 00:12:05.217 fused_ordering(345) 00:12:05.217 fused_ordering(346) 00:12:05.217 fused_ordering(347) 00:12:05.217 fused_ordering(348) 00:12:05.217 fused_ordering(349) 00:12:05.217 fused_ordering(350) 00:12:05.217 fused_ordering(351) 00:12:05.217 fused_ordering(352) 00:12:05.217 fused_ordering(353) 00:12:05.217 fused_ordering(354) 00:12:05.217 fused_ordering(355) 00:12:05.217 fused_ordering(356) 00:12:05.217 fused_ordering(357) 00:12:05.217 fused_ordering(358) 00:12:05.217 fused_ordering(359) 00:12:05.217 fused_ordering(360) 00:12:05.217 fused_ordering(361) 00:12:05.217 fused_ordering(362) 00:12:05.217 fused_ordering(363) 00:12:05.217 fused_ordering(364) 00:12:05.217 fused_ordering(365) 00:12:05.217 fused_ordering(366) 00:12:05.217 fused_ordering(367) 00:12:05.217 fused_ordering(368) 00:12:05.217 fused_ordering(369) 00:12:05.217 fused_ordering(370) 00:12:05.217 fused_ordering(371) 00:12:05.217 fused_ordering(372) 00:12:05.217 fused_ordering(373) 00:12:05.217 fused_ordering(374) 00:12:05.217 fused_ordering(375) 00:12:05.217 fused_ordering(376) 00:12:05.217 fused_ordering(377) 00:12:05.217 fused_ordering(378) 00:12:05.217 fused_ordering(379) 00:12:05.217 fused_ordering(380) 00:12:05.217 fused_ordering(381) 00:12:05.217 fused_ordering(382) 00:12:05.217 fused_ordering(383) 00:12:05.217 fused_ordering(384) 00:12:05.217 fused_ordering(385) 00:12:05.217 fused_ordering(386) 00:12:05.217 fused_ordering(387) 00:12:05.217 fused_ordering(388) 00:12:05.217 fused_ordering(389) 00:12:05.218 fused_ordering(390) 00:12:05.218 fused_ordering(391) 00:12:05.218 fused_ordering(392) 00:12:05.218 fused_ordering(393) 00:12:05.218 fused_ordering(394) 00:12:05.218 fused_ordering(395) 00:12:05.218 fused_ordering(396) 00:12:05.218 fused_ordering(397) 00:12:05.218 fused_ordering(398) 00:12:05.218 fused_ordering(399) 00:12:05.218 fused_ordering(400) 00:12:05.218 fused_ordering(401) 00:12:05.218 fused_ordering(402) 00:12:05.218 fused_ordering(403) 00:12:05.218 fused_ordering(404) 00:12:05.218 fused_ordering(405) 00:12:05.218 fused_ordering(406) 00:12:05.218 fused_ordering(407) 00:12:05.218 fused_ordering(408) 00:12:05.218 fused_ordering(409) 00:12:05.218 fused_ordering(410) 00:12:05.477 fused_ordering(411) 00:12:05.477 fused_ordering(412) 00:12:05.477 fused_ordering(413) 00:12:05.477 fused_ordering(414) 00:12:05.477 fused_ordering(415) 00:12:05.477 fused_ordering(416) 00:12:05.477 fused_ordering(417) 00:12:05.477 fused_ordering(418) 00:12:05.477 
fused_ordering(419) 00:12:05.477 fused_ordering(420) 00:12:05.477 fused_ordering(421) 00:12:05.477 fused_ordering(422) 00:12:05.477 fused_ordering(423) 00:12:05.477 fused_ordering(424) 00:12:05.477 fused_ordering(425) 00:12:05.477 fused_ordering(426) 00:12:05.477 fused_ordering(427) 00:12:05.477 fused_ordering(428) 00:12:05.477 fused_ordering(429) 00:12:05.477 fused_ordering(430) 00:12:05.477 fused_ordering(431) 00:12:05.477 fused_ordering(432) 00:12:05.477 fused_ordering(433) 00:12:05.477 fused_ordering(434) 00:12:05.477 fused_ordering(435) 00:12:05.477 fused_ordering(436) 00:12:05.477 fused_ordering(437) 00:12:05.477 fused_ordering(438) 00:12:05.477 fused_ordering(439) 00:12:05.477 fused_ordering(440) 00:12:05.477 fused_ordering(441) 00:12:05.477 fused_ordering(442) 00:12:05.477 fused_ordering(443) 00:12:05.477 fused_ordering(444) 00:12:05.477 fused_ordering(445) 00:12:05.477 fused_ordering(446) 00:12:05.477 fused_ordering(447) 00:12:05.477 fused_ordering(448) 00:12:05.477 fused_ordering(449) 00:12:05.477 fused_ordering(450) 00:12:05.477 fused_ordering(451) 00:12:05.477 fused_ordering(452) 00:12:05.477 fused_ordering(453) 00:12:05.477 fused_ordering(454) 00:12:05.477 fused_ordering(455) 00:12:05.477 fused_ordering(456) 00:12:05.477 fused_ordering(457) 00:12:05.477 fused_ordering(458) 00:12:05.477 fused_ordering(459) 00:12:05.477 fused_ordering(460) 00:12:05.477 fused_ordering(461) 00:12:05.477 fused_ordering(462) 00:12:05.477 fused_ordering(463) 00:12:05.477 fused_ordering(464) 00:12:05.477 fused_ordering(465) 00:12:05.477 fused_ordering(466) 00:12:05.477 fused_ordering(467) 00:12:05.477 fused_ordering(468) 00:12:05.477 fused_ordering(469) 00:12:05.477 fused_ordering(470) 00:12:05.477 fused_ordering(471) 00:12:05.477 fused_ordering(472) 00:12:05.477 fused_ordering(473) 00:12:05.477 fused_ordering(474) 00:12:05.477 fused_ordering(475) 00:12:05.477 fused_ordering(476) 00:12:05.477 fused_ordering(477) 00:12:05.477 fused_ordering(478) 00:12:05.477 fused_ordering(479) 00:12:05.477 fused_ordering(480) 00:12:05.477 fused_ordering(481) 00:12:05.477 fused_ordering(482) 00:12:05.477 fused_ordering(483) 00:12:05.477 fused_ordering(484) 00:12:05.477 fused_ordering(485) 00:12:05.477 fused_ordering(486) 00:12:05.477 fused_ordering(487) 00:12:05.477 fused_ordering(488) 00:12:05.477 fused_ordering(489) 00:12:05.477 fused_ordering(490) 00:12:05.477 fused_ordering(491) 00:12:05.477 fused_ordering(492) 00:12:05.477 fused_ordering(493) 00:12:05.477 fused_ordering(494) 00:12:05.477 fused_ordering(495) 00:12:05.477 fused_ordering(496) 00:12:05.477 fused_ordering(497) 00:12:05.477 fused_ordering(498) 00:12:05.477 fused_ordering(499) 00:12:05.477 fused_ordering(500) 00:12:05.477 fused_ordering(501) 00:12:05.477 fused_ordering(502) 00:12:05.477 fused_ordering(503) 00:12:05.477 fused_ordering(504) 00:12:05.477 fused_ordering(505) 00:12:05.477 fused_ordering(506) 00:12:05.477 fused_ordering(507) 00:12:05.478 fused_ordering(508) 00:12:05.478 fused_ordering(509) 00:12:05.478 fused_ordering(510) 00:12:05.478 fused_ordering(511) 00:12:05.478 fused_ordering(512) 00:12:05.478 fused_ordering(513) 00:12:05.478 fused_ordering(514) 00:12:05.478 fused_ordering(515) 00:12:05.478 fused_ordering(516) 00:12:05.478 fused_ordering(517) 00:12:05.478 fused_ordering(518) 00:12:05.478 fused_ordering(519) 00:12:05.478 fused_ordering(520) 00:12:05.478 fused_ordering(521) 00:12:05.478 fused_ordering(522) 00:12:05.478 fused_ordering(523) 00:12:05.478 fused_ordering(524) 00:12:05.478 fused_ordering(525) 00:12:05.478 fused_ordering(526) 
00:12:05.478 fused_ordering(527) 00:12:05.478 fused_ordering(528) 00:12:05.478 fused_ordering(529) 00:12:05.478 fused_ordering(530) 00:12:05.478 fused_ordering(531) 00:12:05.478 fused_ordering(532) 00:12:05.478 fused_ordering(533) 00:12:05.478 fused_ordering(534) 00:12:05.478 fused_ordering(535) 00:12:05.478 fused_ordering(536) 00:12:05.478 fused_ordering(537) 00:12:05.478 fused_ordering(538) 00:12:05.478 fused_ordering(539) 00:12:05.478 fused_ordering(540) 00:12:05.478 fused_ordering(541) 00:12:05.478 fused_ordering(542) 00:12:05.478 fused_ordering(543) 00:12:05.478 fused_ordering(544) 00:12:05.478 fused_ordering(545) 00:12:05.478 fused_ordering(546) 00:12:05.478 fused_ordering(547) 00:12:05.478 fused_ordering(548) 00:12:05.478 fused_ordering(549) 00:12:05.478 fused_ordering(550) 00:12:05.478 fused_ordering(551) 00:12:05.478 fused_ordering(552) 00:12:05.478 fused_ordering(553) 00:12:05.478 fused_ordering(554) 00:12:05.478 fused_ordering(555) 00:12:05.478 fused_ordering(556) 00:12:05.478 fused_ordering(557) 00:12:05.478 fused_ordering(558) 00:12:05.478 fused_ordering(559) 00:12:05.478 fused_ordering(560) 00:12:05.478 fused_ordering(561) 00:12:05.478 fused_ordering(562) 00:12:05.478 fused_ordering(563) 00:12:05.478 fused_ordering(564) 00:12:05.478 fused_ordering(565) 00:12:05.478 fused_ordering(566) 00:12:05.478 fused_ordering(567) 00:12:05.478 fused_ordering(568) 00:12:05.478 fused_ordering(569) 00:12:05.478 fused_ordering(570) 00:12:05.478 fused_ordering(571) 00:12:05.478 fused_ordering(572) 00:12:05.478 fused_ordering(573) 00:12:05.478 fused_ordering(574) 00:12:05.478 fused_ordering(575) 00:12:05.478 fused_ordering(576) 00:12:05.478 fused_ordering(577) 00:12:05.478 fused_ordering(578) 00:12:05.478 fused_ordering(579) 00:12:05.478 fused_ordering(580) 00:12:05.478 fused_ordering(581) 00:12:05.478 fused_ordering(582) 00:12:05.478 fused_ordering(583) 00:12:05.478 fused_ordering(584) 00:12:05.478 fused_ordering(585) 00:12:05.478 fused_ordering(586) 00:12:05.478 fused_ordering(587) 00:12:05.478 fused_ordering(588) 00:12:05.478 fused_ordering(589) 00:12:05.478 fused_ordering(590) 00:12:05.478 fused_ordering(591) 00:12:05.478 fused_ordering(592) 00:12:05.478 fused_ordering(593) 00:12:05.478 fused_ordering(594) 00:12:05.478 fused_ordering(595) 00:12:05.478 fused_ordering(596) 00:12:05.478 fused_ordering(597) 00:12:05.478 fused_ordering(598) 00:12:05.478 fused_ordering(599) 00:12:05.478 fused_ordering(600) 00:12:05.478 fused_ordering(601) 00:12:05.478 fused_ordering(602) 00:12:05.478 fused_ordering(603) 00:12:05.478 fused_ordering(604) 00:12:05.478 fused_ordering(605) 00:12:05.478 fused_ordering(606) 00:12:05.478 fused_ordering(607) 00:12:05.478 fused_ordering(608) 00:12:05.478 fused_ordering(609) 00:12:05.478 fused_ordering(610) 00:12:05.478 fused_ordering(611) 00:12:05.478 fused_ordering(612) 00:12:05.478 fused_ordering(613) 00:12:05.478 fused_ordering(614) 00:12:05.478 fused_ordering(615) 00:12:06.044 fused_ordering(616) 00:12:06.044 fused_ordering(617) 00:12:06.044 fused_ordering(618) 00:12:06.044 fused_ordering(619) 00:12:06.044 fused_ordering(620) 00:12:06.044 fused_ordering(621) 00:12:06.044 fused_ordering(622) 00:12:06.044 fused_ordering(623) 00:12:06.044 fused_ordering(624) 00:12:06.044 fused_ordering(625) 00:12:06.044 fused_ordering(626) 00:12:06.044 fused_ordering(627) 00:12:06.044 fused_ordering(628) 00:12:06.044 fused_ordering(629) 00:12:06.044 fused_ordering(630) 00:12:06.044 fused_ordering(631) 00:12:06.044 fused_ordering(632) 00:12:06.044 fused_ordering(633) 00:12:06.044 
fused_ordering(634) [fused_ordering(635) through fused_ordering(955) elided: identical per-counter progress entries, timestamps 00:12:06.044-00:12:06.985] 00:12:06.985 fused_ordering(956)
00:12:06.985 fused_ordering(957) 00:12:06.985 fused_ordering(958) 00:12:06.985 fused_ordering(959) 00:12:06.985 fused_ordering(960) 00:12:06.985 fused_ordering(961) 00:12:06.985 fused_ordering(962) 00:12:06.985 fused_ordering(963) 00:12:06.985 fused_ordering(964) 00:12:06.985 fused_ordering(965) 00:12:06.985 fused_ordering(966) 00:12:06.985 fused_ordering(967) 00:12:06.985 fused_ordering(968) 00:12:06.985 fused_ordering(969) 00:12:06.985 fused_ordering(970) 00:12:06.985 fused_ordering(971) 00:12:06.985 fused_ordering(972) 00:12:06.985 fused_ordering(973) 00:12:06.985 fused_ordering(974) 00:12:06.985 fused_ordering(975) 00:12:06.985 fused_ordering(976) 00:12:06.985 fused_ordering(977) 00:12:06.985 fused_ordering(978) 00:12:06.985 fused_ordering(979) 00:12:06.985 fused_ordering(980) 00:12:06.985 fused_ordering(981) 00:12:06.985 fused_ordering(982) 00:12:06.985 fused_ordering(983) 00:12:06.985 fused_ordering(984) 00:12:06.985 fused_ordering(985) 00:12:06.985 fused_ordering(986) 00:12:06.985 fused_ordering(987) 00:12:06.985 fused_ordering(988) 00:12:06.985 fused_ordering(989) 00:12:06.985 fused_ordering(990) 00:12:06.985 fused_ordering(991) 00:12:06.985 fused_ordering(992) 00:12:06.985 fused_ordering(993) 00:12:06.985 fused_ordering(994) 00:12:06.985 fused_ordering(995) 00:12:06.985 fused_ordering(996) 00:12:06.985 fused_ordering(997) 00:12:06.985 fused_ordering(998) 00:12:06.985 fused_ordering(999) 00:12:06.985 fused_ordering(1000) 00:12:06.985 fused_ordering(1001) 00:12:06.985 fused_ordering(1002) 00:12:06.985 fused_ordering(1003) 00:12:06.985 fused_ordering(1004) 00:12:06.985 fused_ordering(1005) 00:12:06.985 fused_ordering(1006) 00:12:06.985 fused_ordering(1007) 00:12:06.985 fused_ordering(1008) 00:12:06.985 fused_ordering(1009) 00:12:06.985 fused_ordering(1010) 00:12:06.985 fused_ordering(1011) 00:12:06.985 fused_ordering(1012) 00:12:06.985 fused_ordering(1013) 00:12:06.985 fused_ordering(1014) 00:12:06.985 fused_ordering(1015) 00:12:06.985 fused_ordering(1016) 00:12:06.985 fused_ordering(1017) 00:12:06.985 fused_ordering(1018) 00:12:06.985 fused_ordering(1019) 00:12:06.985 fused_ordering(1020) 00:12:06.985 fused_ordering(1021) 00:12:06.985 fused_ordering(1022) 00:12:06.985 fused_ordering(1023) 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.985 rmmod nvme_tcp 00:12:06.985 rmmod nvme_fabrics 00:12:06.985 rmmod nvme_keyring 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 886822 ']' 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 886822 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 886822 ']' 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 886822 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 886822 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 886822' 00:12:06.985 killing process with pid 886822 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 886822 00:12:06.985 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 886822 00:12:07.245 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:07.245 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:07.245 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:07.245 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:07.245 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:07.245 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.245 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.245 14:13:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.150 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:09.150 00:12:09.151 real 0m7.599s 00:12:09.151 user 0m5.288s 00:12:09.151 sys 0m3.187s 00:12:09.151 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:09.151 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.151 ************************************ 00:12:09.151 END TEST nvmf_fused_ordering 00:12:09.151 ************************************ 00:12:09.151 14:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:09.151 14:13:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:09.151 14:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:09.151 14:13:38 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:09.151 14:13:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.151 ************************************ 00:12:09.151 START TEST nvmf_ns_masking 00:12:09.151 ************************************ 00:12:09.151 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:09.410 * Looking for test storage... 00:12:09.410 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.410 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.411 14:13:38 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6244bb33-c55d-451b-baca-04afeacb475c 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=85c4ecf8-97ef-4827-80e9-e67ea2015f21 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1f9a1e09-1135-424f-bb53-a728a500fe2d 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.411 14:13:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:11.318 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:11.318 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:11.318 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:11.318 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:11.318 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:11.319 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:11.319 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:11.319 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:11.319 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:11.319 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:11.319 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:11.319 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:11.319 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:11.578 14:13:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:11.578 14:13:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:11.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:11.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:12:11.578 00:12:11.578 --- 10.0.0.2 ping statistics --- 00:12:11.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.578 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:11.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:11.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:12:11.578 00:12:11.578 --- 10.0.0.1 ping statistics --- 00:12:11.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:11.578 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=889129 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 889129 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 889129 ']' 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.578 14:13:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:11.578 [2024-07-25 14:13:41.102134] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:12:11.578 [2024-07-25 14:13:41.102218] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:11.578 EAL: No free 2048 kB hugepages reported on node 1 00:12:11.578 [2024-07-25 14:13:41.180161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.842 [2024-07-25 14:13:41.313034] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:11.842 [2024-07-25 14:13:41.313130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:11.842 [2024-07-25 14:13:41.313165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:11.842 [2024-07-25 14:13:41.313181] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:11.842 [2024-07-25 14:13:41.313194] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:11.842 [2024-07-25 14:13:41.313247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:12.782 [2024-07-25 14:13:42.383633] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:12.782 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:13.040 Malloc1 00:12:13.040 14:13:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:13.298 Malloc2 00:12:13.298 14:13:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:13.556 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:13.814 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.074 [2024-07-25 14:13:43.631454] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.074 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:14.074 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1f9a1e09-1135-424f-bb53-a728a500fe2d -a 10.0.0.2 -s 4420 -i 4 00:12:14.333 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.333 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:14.333 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.333 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:14.333 14:13:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:16.237 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:16.238 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:16.238 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.238 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:16.238 [ 0]:0x1 00:12:16.238 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:12:16.238 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.497 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa70dfc001974647b33a0274d2daa1cf 00:12:16.497 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa70dfc001974647b33a0274d2daa1cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.497 14:13:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:16.757 [ 0]:0x1 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa70dfc001974647b33a0274d2daa1cf 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa70dfc001974647b33a0274d2daa1cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:16.757 [ 1]:0x2 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6857a0b4e54a4b76a0b2d27185f40ef1 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6857a0b4e54a4b76a0b2d27185f40ef1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:16.757 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.016 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.274 14:13:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:17.533 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:17.533 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1f9a1e09-1135-424f-bb53-a728a500fe2d -a 10.0.0.2 -s 4420 -i 4 00:12:17.792 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:17.792 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:17.792 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.792 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:17.792 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:17.792 14:13:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:12:19.721 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:19.983 [ 0]:0x2 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6857a0b4e54a4b76a0b2d27185f40ef1 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6857a0b4e54a4b76a0b2d27185f40ef1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:19.983 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:20.241 [ 0]:0x1 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa70dfc001974647b33a0274d2daa1cf 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa70dfc001974647b33a0274d2daa1cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:20.241 [ 1]:0x2 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6857a0b4e54a4b76a0b2d27185f40ef1 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6857a0b4e54a4b76a0b2d27185f40ef1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:20.241 14:13:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:20.499 [ 0]:0x2 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6857a0b4e54a4b76a0b2d27185f40ef1 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6857a0b4e54a4b76a0b2d27185f40ef1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:20.499 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.757 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:21.015 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:21.015 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1f9a1e09-1135-424f-bb53-a728a500fe2d -a 10.0.0.2 -s 4420 -i 4 00:12:21.015 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:21.015 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:21.015 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.015 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:21.015 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:21.015 14:13:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:22.916 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:22.916 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:22.916 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:22.916 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:22.916 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:22.916 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:22.916 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:22.916 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:23.174 [ 0]:0x1 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=aa70dfc001974647b33a0274d2daa1cf 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ aa70dfc001974647b33a0274d2daa1cf != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:23.174 [ 1]:0x2 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6857a0b4e54a4b76a0b2d27185f40ef1 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6857a0b4e54a4b76a0b2d27185f40ef1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.174 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:23.432 14:13:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:23.432 [ 0]:0x2 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:23.432 14:13:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6857a0b4e54a4b76a0b2d27185f40ef1 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6857a0b4e54a4b76a0b2d27185f40ef1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:23.432 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:23.691 [2024-07-25 14:13:53.240494] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:23.691 request: 00:12:23.691 { 00:12:23.691 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.691 "nsid": 2, 00:12:23.691 "host": "nqn.2016-06.io.spdk:host1", 00:12:23.691 "method": "nvmf_ns_remove_host", 00:12:23.691 "req_id": 1 00:12:23.691 } 00:12:23.691 Got JSON-RPC error response 00:12:23.691 response: 00:12:23.691 { 00:12:23.691 "code": -32602, 00:12:23.691 "message": "Invalid parameters" 00:12:23.691 } 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:23.691 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:23.692 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:23.949 [ 0]:0x2 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6857a0b4e54a4b76a0b2d27185f40ef1 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6857a0b4e54a4b76a0b2d27185f40ef1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=890716 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 890716 /var/tmp/host.sock 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 890716 ']' 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:23.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:23.949 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:23.949 [2024-07-25 14:13:53.577710] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:12:23.949 [2024-07-25 14:13:53.577787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid890716 ] 00:12:24.206 EAL: No free 2048 kB hugepages reported on node 1 00:12:24.206 [2024-07-25 14:13:53.643549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.206 [2024-07-25 14:13:53.755287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.464 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:24.464 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:24.464 14:13:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.722 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:24.980 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6244bb33-c55d-451b-baca-04afeacb475c 00:12:24.980 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:24.980 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6244BB33C55D451BBACA04AFEACB475C -i 00:12:25.238 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 85c4ecf8-97ef-4827-80e9-e67ea2015f21 00:12:25.238 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:25.238 14:13:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 85C4ECF897EF482780E9E67EA2015F21 -i 00:12:25.496 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:25.753 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:26.010 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:26.010 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:26.580 nvme0n1 00:12:26.580 14:13:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:26.580 14:13:55 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:26.839 nvme1n2 00:12:26.839 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:26.839 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:26.839 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:26.839 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:26.839 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:27.097 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:27.097 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:27.097 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:27.097 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:27.355 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6244bb33-c55d-451b-baca-04afeacb475c == \6\2\4\4\b\b\3\3\-\c\5\5\d\-\4\5\1\b\-\b\a\c\a\-\0\4\a\f\e\a\c\b\4\7\5\c ]] 00:12:27.355 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:27.355 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:27.355 14:13:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 85c4ecf8-97ef-4827-80e9-e67ea2015f21 == \8\5\c\4\e\c\f\8\-\9\7\e\f\-\4\8\2\7\-\8\0\e\9\-\e\6\7\e\a\2\0\1\5\f\2\1 ]] 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 890716 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 890716 ']' 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 890716 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 890716 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 890716' 00:12:27.613 killing process with pid 890716 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 890716 00:12:27.613 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 890716 00:12:27.871 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:28.129 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:28.129 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:28.129 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:28.129 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:28.129 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.129 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:28.129 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.129 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.129 rmmod nvme_tcp 00:12:28.129 rmmod nvme_fabrics 00:12:28.388 rmmod nvme_keyring 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 889129 ']' 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 889129 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 889129 ']' 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 889129 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 889129 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 889129' 00:12:28.388 killing process with pid 889129 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 889129 00:12:28.388 14:13:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 889129 00:12:28.648 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.648 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.648 14:13:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.648 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.648 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.648 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.648 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.648 14:13:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.556 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:30.556 00:12:30.556 real 0m21.425s 00:12:30.556 user 0m27.611s 00:12:30.556 sys 0m4.132s 00:12:30.556 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.556 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:30.556 ************************************ 00:12:30.556 END TEST nvmf_ns_masking 00:12:30.556 ************************************ 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.815 ************************************ 00:12:30.815 START TEST nvmf_nvme_cli 00:12:30.815 ************************************ 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:30.815 * Looking for test storage... 
00:12:30.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.815 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.816 14:14:00 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.816 14:14:00 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.352 14:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:33.352 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:33.352 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:33.352 14:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:33.352 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:33.352 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:33.352 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:33.353 14:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:33.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:12:33.353 00:12:33.353 --- 10.0.0.2 ping statistics --- 00:12:33.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.353 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:33.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:12:33.353 00:12:33.353 --- 10.0.0.1 ping statistics --- 00:12:33.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.353 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=893244 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 893244 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 893244 ']' 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.353 [2024-07-25 14:14:02.619475] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:12:33.353 [2024-07-25 14:14:02.619552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.353 EAL: No free 2048 kB hugepages reported on node 1 00:12:33.353 [2024-07-25 14:14:02.684399] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.353 [2024-07-25 14:14:02.792650] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.353 [2024-07-25 14:14:02.792704] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.353 [2024-07-25 14:14:02.792734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.353 [2024-07-25 14:14:02.792745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.353 [2024-07-25 14:14:02.792755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.353 [2024-07-25 14:14:02.792825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.353 [2024-07-25 14:14:02.792923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.353 [2024-07-25 14:14:02.792976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.353 [2024-07-25 14:14:02.792979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.353 [2024-07-25 14:14:02.956711] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.353 Malloc0 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:33.353 14:14:02 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.353 14:14:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.614 Malloc1 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.614 [2024-07-25 14:14:03.040544] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:33.614 00:12:33.614 Discovery Log Number of Records 2, Generation counter 2 00:12:33.614 =====Discovery Log Entry 0====== 00:12:33.614 trtype: tcp 00:12:33.614 adrfam: ipv4 00:12:33.614 subtype: current discovery subsystem 00:12:33.614 treq: not required 
00:12:33.614 portid: 0 00:12:33.614 trsvcid: 4420 00:12:33.614 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:33.614 traddr: 10.0.0.2 00:12:33.614 eflags: explicit discovery connections, duplicate discovery information 00:12:33.614 sectype: none 00:12:33.614 =====Discovery Log Entry 1====== 00:12:33.614 trtype: tcp 00:12:33.614 adrfam: ipv4 00:12:33.614 subtype: nvme subsystem 00:12:33.614 treq: not required 00:12:33.614 portid: 0 00:12:33.614 trsvcid: 4420 00:12:33.614 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:33.614 traddr: 10.0.0.2 00:12:33.614 eflags: none 00:12:33.614 sectype: none 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:33.614 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.552 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:34.552 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:34.552 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.552 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:34.552 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:34.552 14:14:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:36.452 /dev/nvme0n1 ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:36.452 14:14:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.452 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:36.452 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.452 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:36.452 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:36.452 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.452 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:36.452 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.452 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:36.452 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:36.453 rmmod nvme_tcp 00:12:36.453 rmmod nvme_fabrics 00:12:36.453 rmmod nvme_keyring 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 893244 ']' 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 893244 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 893244 ']' 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 893244 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:36.453 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 893244 00:12:36.712 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:36.712 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:36.712 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 893244' 00:12:36.712 killing process with pid 893244 00:12:36.712 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 893244 00:12:36.712 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 893244 00:12:36.972 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:36.972 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:36.972 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:36.972 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.972 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:36.972 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.972 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.972 14:14:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.879 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:38.879 00:12:38.879 real 0m8.244s 00:12:38.879 user 0m14.914s 00:12:38.879 sys 0m2.280s 00:12:38.879 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:38.879 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:38.879 ************************************ 00:12:38.879 END TEST nvmf_nvme_cli 00:12:38.879 ************************************ 00:12:38.879 14:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:38.879 14:14:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:38.880 14:14:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:38.880 14:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:38.880 14:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:38.880 14:14:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:39.138 ************************************ 00:12:39.138 START TEST nvmf_vfio_user 00:12:39.138 ************************************ 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:39.138 * Looking for test storage... 
00:12:39.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:39.138 14:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=894086 00:12:39.138 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 894086' 00:12:39.139 Process pid: 894086 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 894086 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 894086 ']' 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.139 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:39.139 [2024-07-25 14:14:08.652319] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:12:39.139 [2024-07-25 14:14:08.652434] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.139 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.139 [2024-07-25 14:14:08.711469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.440 [2024-07-25 14:14:08.821120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.440 [2024-07-25 14:14:08.821184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:39.440 [2024-07-25 14:14:08.821211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.440 [2024-07-25 14:14:08.821223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.440 [2024-07-25 14:14:08.821233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.440 [2024-07-25 14:14:08.821304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.440 [2024-07-25 14:14:08.821400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.440 [2024-07-25 14:14:08.821468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.440 [2024-07-25 14:14:08.821471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.440 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.440 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:39.440 14:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:40.378 14:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:40.637 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:40.637 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:40.637 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:40.637 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:40.637 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:40.894 Malloc1 00:12:40.894 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:41.152 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:41.410 14:14:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:41.669 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:41.669 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:41.669 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:41.926 Malloc2 00:12:41.926 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:12:42.184 14:14:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:42.442 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:42.701 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:42.702 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:42.702 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:42.702 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:42.702 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:42.702 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:42.702 [2024-07-25 14:14:12.288307] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:12:42.702 [2024-07-25 14:14:12.288348] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894514 ] 00:12:42.702 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.702 [2024-07-25 14:14:12.321462] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:42.702 [2024-07-25 14:14:12.330535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:42.702 [2024-07-25 14:14:12.330562] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fae24166000 00:12:42.702 [2024-07-25 14:14:12.331528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.702 [2024-07-25 14:14:12.332523] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.702 [2024-07-25 14:14:12.333528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.702 [2024-07-25 14:14:12.334536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.702 [2024-07-25 14:14:12.335536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.702 [2024-07-25 14:14:12.336540] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.702 [2024-07-25 14:14:12.337548] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:42.702 [2024-07-25 14:14:12.338552] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:42.702 [2024-07-25 14:14:12.339561] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:42.702 [2024-07-25 14:14:12.339580] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fae2415b000 00:12:42.702 [2024-07-25 14:14:12.340695] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:42.964 [2024-07-25 14:14:12.355558] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:42.964 [2024-07-25 14:14:12.355616] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:42.964 [2024-07-25 14:14:12.360696] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:42.964 [2024-07-25 14:14:12.360755] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:42.965 [2024-07-25 14:14:12.360855] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:42.965 [2024-07-25 14:14:12.360888] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:42.965 [2024-07-25 14:14:12.360899] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:42.965 [2024-07-25 14:14:12.361688] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:42.965 [2024-07-25 14:14:12.361714] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:42.965 [2024-07-25 14:14:12.361727] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:42.965 [2024-07-25 14:14:12.362695] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:42.965 [2024-07-25 14:14:12.362714] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:42.965 [2024-07-25 14:14:12.362728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:42.965 [2024-07-25 14:14:12.363702] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:42.965 [2024-07-25 14:14:12.363721] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:42.965 [2024-07-25 14:14:12.364709] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:42.965 [2024-07-25 14:14:12.364729] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:42.965 [2024-07-25 14:14:12.364742] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:42.965 [2024-07-25 14:14:12.364755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:42.965 [2024-07-25 14:14:12.364865] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:42.965 [2024-07-25 14:14:12.364873] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:42.965 [2024-07-25 14:14:12.364882] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:42.965 [2024-07-25 14:14:12.365716] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:42.965 [2024-07-25 14:14:12.366716] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:42.965 [2024-07-25 14:14:12.367729] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:42.965 [2024-07-25 14:14:12.368727] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:42.965 [2024-07-25 14:14:12.368840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:42.965 [2024-07-25 14:14:12.369739] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:42.965 [2024-07-25 14:14:12.369757] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:42.965 [2024-07-25 14:14:12.369766] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.369790] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:42.965 [2024-07-25 14:14:12.369803] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.369834] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.965 [2024-07-25 14:14:12.369844] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.965 [2024-07-25 14:14:12.369851] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.965 [2024-07-25 14:14:12.369873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.965 [2024-07-25 14:14:12.369943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:42.965 [2024-07-25 14:14:12.369962] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:42.965 [2024-07-25 14:14:12.369970] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:42.965 [2024-07-25 14:14:12.369978] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:42.965 [2024-07-25 14:14:12.369986] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:42.965 [2024-07-25 14:14:12.369994] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:42.965 [2024-07-25 14:14:12.370001] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:42.965 [2024-07-25 14:14:12.370013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:42.965 [2024-07-25 14:14:12.370087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:42.965 [2024-07-25 14:14:12.370114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.965 [2024-07-25 14:14:12.370128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.965 [2024-07-25 14:14:12.370141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.965 [2024-07-25 14:14:12.370153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:42.965 [2024-07-25 14:14:12.370161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:42.965 [2024-07-25 14:14:12.370204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:42.965 [2024-07-25 14:14:12.370216] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:42.965 
[2024-07-25 14:14:12.370225] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:42.965 [2024-07-25 14:14:12.370277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:42.965 [2024-07-25 14:14:12.370346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370379] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370394] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:42.965 [2024-07-25 14:14:12.370402] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:42.965 [2024-07-25 14:14:12.370408] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.965 [2024-07-25 14:14:12.370433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:42.965 [2024-07-25 14:14:12.370449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:42.965 [2024-07-25 14:14:12.370477] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:42.965 [2024-07-25 14:14:12.370494] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:42.965 [2024-07-25 14:14:12.370521] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.965 [2024-07-25 14:14:12.370529] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.965 [2024-07-25 14:14:12.370535] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.965 [2024-07-25 14:14:12.370544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.965 [2024-07-25 14:14:12.370573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 14:14:12.370597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370612] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370624] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:42.966 [2024-07-25 14:14:12.370632] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.966 [2024-07-25 14:14:12.370638] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.966 [2024-07-25 14:14:12.370647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.966 [2024-07-25 14:14:12.370660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 14:14:12.370674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370700] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370722] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370739] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:42.966 [2024-07-25 14:14:12.370746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:42.966 [2024-07-25 14:14:12.370755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:42.966 [2024-07-25 14:14:12.370786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:42.966 [2024-07-25 14:14:12.370807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 14:14:12.370827] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:42.966 [2024-07-25 14:14:12.370839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 14:14:12.370855] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:42.966 [2024-07-25 
14:14:12.370869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 14:14:12.370885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:42.966 [2024-07-25 14:14:12.370896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 14:14:12.370919] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:42.966 [2024-07-25 14:14:12.370929] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:42.966 [2024-07-25 14:14:12.370935] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:42.966 [2024-07-25 14:14:12.370941] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:42.966 [2024-07-25 14:14:12.370946] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:42.966 [2024-07-25 14:14:12.370955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:42.966 [2024-07-25 14:14:12.370966] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:42.966 [2024-07-25 14:14:12.370974] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:42.966 [2024-07-25 14:14:12.370980] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.966 [2024-07-25 14:14:12.370988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:42.966 [2024-07-25 14:14:12.370999] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:42.966 [2024-07-25 14:14:12.371006] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:42.966 [2024-07-25 14:14:12.371012] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.966 [2024-07-25 14:14:12.371021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:42.966 [2024-07-25 14:14:12.371033] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:42.966 [2024-07-25 14:14:12.371056] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:42.966 [2024-07-25 14:14:12.371071] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:42.966 [2024-07-25 14:14:12.371081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:42.966 [2024-07-25 14:14:12.371094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 14:14:12.371115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 
14:14:12.371136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:42.966 [2024-07-25 14:14:12.371148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:42.966 ===================================================== 00:12:42.966 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:42.966 ===================================================== 00:12:42.966 Controller Capabilities/Features 00:12:42.966 ================================ 00:12:42.966 Vendor ID: 4e58 00:12:42.966 Subsystem Vendor ID: 4e58 00:12:42.966 Serial Number: SPDK1 00:12:42.966 Model Number: SPDK bdev Controller 00:12:42.966 Firmware Version: 24.09 00:12:42.966 Recommended Arb Burst: 6 00:12:42.966 IEEE OUI Identifier: 8d 6b 50 00:12:42.966 Multi-path I/O 00:12:42.966 May have multiple subsystem ports: Yes 00:12:42.966 May have multiple controllers: Yes 00:12:42.966 Associated with SR-IOV VF: No 00:12:42.966 Max Data Transfer Size: 131072 00:12:42.966 Max Number of Namespaces: 32 00:12:42.966 Max Number of I/O Queues: 127 00:12:42.966 NVMe Specification Version (VS): 1.3 00:12:42.966 NVMe Specification Version (Identify): 1.3 00:12:42.966 Maximum Queue Entries: 256 00:12:42.966 Contiguous Queues Required: Yes 00:12:42.966 Arbitration Mechanisms Supported 00:12:42.966 Weighted Round Robin: Not Supported 00:12:42.966 Vendor Specific: Not Supported 00:12:42.966 Reset Timeout: 15000 ms 00:12:42.967 Doorbell Stride: 4 bytes 00:12:42.967 NVM Subsystem Reset: Not Supported 00:12:42.967 Command Sets Supported 00:12:42.967 NVM Command Set: Supported 00:12:42.967 Boot Partition: Not Supported 00:12:42.967 Memory Page Size Minimum: 4096 bytes 00:12:42.967 Memory Page Size Maximum: 4096 bytes 00:12:42.967 Persistent Memory Region: Not Supported 00:12:42.967 Optional Asynchronous Events Supported 00:12:42.967 Namespace Attribute Notices: Supported 00:12:42.967 Firmware Activation Notices: Not Supported 00:12:42.967 ANA Change Notices: Not Supported 00:12:42.967 PLE Aggregate Log Change Notices: Not Supported 00:12:42.967 LBA Status Info Alert Notices: Not Supported 00:12:42.967 EGE Aggregate Log Change Notices: Not Supported 00:12:42.967 Normal NVM Subsystem Shutdown event: Not Supported 00:12:42.967 Zone Descriptor Change Notices: Not Supported 00:12:42.967 Discovery Log Change Notices: Not Supported 00:12:42.967 Controller Attributes 00:12:42.967 128-bit Host Identifier: Supported 00:12:42.967 Non-Operational Permissive Mode: Not Supported 00:12:42.967 NVM Sets: Not Supported 00:12:42.967 Read Recovery Levels: Not Supported 00:12:42.967 Endurance Groups: Not Supported 00:12:42.967 Predictable Latency Mode: Not Supported 00:12:42.967 Traffic Based Keep ALive: Not Supported 00:12:42.967 Namespace Granularity: Not Supported 00:12:42.967 SQ Associations: Not Supported 00:12:42.967 UUID List: Not Supported 00:12:42.967 Multi-Domain Subsystem: Not Supported 00:12:42.967 Fixed Capacity Management: Not Supported 00:12:42.967 Variable Capacity Management: Not Supported 00:12:42.967 Delete Endurance Group: Not Supported 00:12:42.967 Delete NVM Set: Not Supported 00:12:42.967 Extended LBA Formats Supported: Not Supported 00:12:42.967 Flexible Data Placement Supported: Not Supported 00:12:42.967 00:12:42.967 Controller Memory Buffer Support 00:12:42.967 ================================ 00:12:42.967 Supported: No 00:12:42.967 00:12:42.967 Persistent 
Memory Region Support 00:12:42.967 ================================ 00:12:42.967 Supported: No 00:12:42.967 00:12:42.967 Admin Command Set Attributes 00:12:42.967 ============================ 00:12:42.967 Security Send/Receive: Not Supported 00:12:42.967 Format NVM: Not Supported 00:12:42.967 Firmware Activate/Download: Not Supported 00:12:42.967 Namespace Management: Not Supported 00:12:42.967 Device Self-Test: Not Supported 00:12:42.967 Directives: Not Supported 00:12:42.967 NVMe-MI: Not Supported 00:12:42.967 Virtualization Management: Not Supported 00:12:42.967 Doorbell Buffer Config: Not Supported 00:12:42.967 Get LBA Status Capability: Not Supported 00:12:42.967 Command & Feature Lockdown Capability: Not Supported 00:12:42.967 Abort Command Limit: 4 00:12:42.967 Async Event Request Limit: 4 00:12:42.967 Number of Firmware Slots: N/A 00:12:42.967 Firmware Slot 1 Read-Only: N/A 00:12:42.967 Firmware Activation Without Reset: N/A 00:12:42.967 Multiple Update Detection Support: N/A 00:12:42.967 Firmware Update Granularity: No Information Provided 00:12:42.967 Per-Namespace SMART Log: No 00:12:42.967 Asymmetric Namespace Access Log Page: Not Supported 00:12:42.967 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:42.967 Command Effects Log Page: Supported 00:12:42.967 Get Log Page Extended Data: Supported 00:12:42.967 Telemetry Log Pages: Not Supported 00:12:42.967 Persistent Event Log Pages: Not Supported 00:12:42.967 Supported Log Pages Log Page: May Support 00:12:42.967 Commands Supported & Effects Log Page: Not Supported 00:12:42.967 Feature Identifiers & Effects Log Page:May Support 00:12:42.967 NVMe-MI Commands & Effects Log Page: May Support 00:12:42.967 Data Area 4 for Telemetry Log: Not Supported 00:12:42.967 Error Log Page Entries Supported: 128 00:12:42.967 Keep Alive: Supported 00:12:42.967 Keep Alive Granularity: 10000 ms 00:12:42.967 00:12:42.967 NVM Command Set Attributes 00:12:42.967 ========================== 00:12:42.967 Submission Queue Entry Size 00:12:42.967 Max: 64 00:12:42.967 Min: 64 00:12:42.967 Completion Queue Entry Size 00:12:42.967 Max: 16 00:12:42.967 Min: 16 00:12:42.967 Number of Namespaces: 32 00:12:42.967 Compare Command: Supported 00:12:42.967 Write Uncorrectable Command: Not Supported 00:12:42.967 Dataset Management Command: Supported 00:12:42.967 Write Zeroes Command: Supported 00:12:42.967 Set Features Save Field: Not Supported 00:12:42.967 Reservations: Not Supported 00:12:42.967 Timestamp: Not Supported 00:12:42.967 Copy: Supported 00:12:42.967 Volatile Write Cache: Present 00:12:42.967 Atomic Write Unit (Normal): 1 00:12:42.967 Atomic Write Unit (PFail): 1 00:12:42.967 Atomic Compare & Write Unit: 1 00:12:42.967 Fused Compare & Write: Supported 00:12:42.967 Scatter-Gather List 00:12:42.967 SGL Command Set: Supported (Dword aligned) 00:12:42.967 SGL Keyed: Not Supported 00:12:42.967 SGL Bit Bucket Descriptor: Not Supported 00:12:42.967 SGL Metadata Pointer: Not Supported 00:12:42.967 Oversized SGL: Not Supported 00:12:42.967 SGL Metadata Address: Not Supported 00:12:42.968 SGL Offset: Not Supported 00:12:42.968 Transport SGL Data Block: Not Supported 00:12:42.968 Replay Protected Memory Block: Not Supported 00:12:42.968 00:12:42.968 Firmware Slot Information 00:12:42.968 ========================= 00:12:42.968 Active slot: 1 00:12:42.968 Slot 1 Firmware Revision: 24.09 00:12:42.968 00:12:42.968 00:12:42.968 Commands Supported and Effects 00:12:42.968 ============================== 00:12:42.968 Admin Commands 00:12:42.968 -------------- 00:12:42.968 Get 
Log Page (02h): Supported 00:12:42.968 Identify (06h): Supported 00:12:42.968 Abort (08h): Supported 00:12:42.968 Set Features (09h): Supported 00:12:42.968 Get Features (0Ah): Supported 00:12:42.968 Asynchronous Event Request (0Ch): Supported 00:12:42.968 Keep Alive (18h): Supported 00:12:42.968 I/O Commands 00:12:42.968 ------------ 00:12:42.968 Flush (00h): Supported LBA-Change 00:12:42.968 Write (01h): Supported LBA-Change 00:12:42.968 Read (02h): Supported 00:12:42.968 Compare (05h): Supported 00:12:42.968 Write Zeroes (08h): Supported LBA-Change 00:12:42.968 Dataset Management (09h): Supported LBA-Change 00:12:42.968 Copy (19h): Supported LBA-Change 00:12:42.968 00:12:42.968 Error Log 00:12:42.968 ========= 00:12:42.968 00:12:42.968 Arbitration 00:12:42.968 =========== 00:12:42.968 Arbitration Burst: 1 00:12:42.968 00:12:42.968 Power Management 00:12:42.968 ================ 00:12:42.968 Number of Power States: 1 00:12:42.968 Current Power State: Power State #0 00:12:42.968 Power State #0: 00:12:42.968 Max Power: 0.00 W 00:12:42.968 Non-Operational State: Operational 00:12:42.968 Entry Latency: Not Reported 00:12:42.968 Exit Latency: Not Reported 00:12:42.968 Relative Read Throughput: 0 00:12:42.968 Relative Read Latency: 0 00:12:42.968 Relative Write Throughput: 0 00:12:42.968 Relative Write Latency: 0 00:12:42.968 Idle Power: Not Reported 00:12:42.968 Active Power: Not Reported 00:12:42.968 Non-Operational Permissive Mode: Not Supported 00:12:42.968 00:12:42.968 Health Information 00:12:42.968 ================== 00:12:42.968 Critical Warnings: 00:12:42.968 Available Spare Space: OK 00:12:42.968 Temperature: OK 00:12:42.968 Device Reliability: OK 00:12:42.968 Read Only: No 00:12:42.968 Volatile Memory Backup: OK 00:12:42.968 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:42.968 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:42.968 Available Spare: 0% 00:12:42.968 Available Sp[2024-07-25 14:14:12.371279] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:42.968 [2024-07-25 14:14:12.371295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:42.968 [2024-07-25 14:14:12.371359] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:42.968 [2024-07-25 14:14:12.371377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.968 [2024-07-25 14:14:12.371388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.968 [2024-07-25 14:14:12.371398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.968 [2024-07-25 14:14:12.371408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:42.968 [2024-07-25 14:14:12.375073] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:42.968 [2024-07-25 14:14:12.375098] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:42.968 [2024-07-25 14:14:12.375774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:42.968 [2024-07-25 14:14:12.375864] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:42.968 [2024-07-25 14:14:12.375878] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:42.968 [2024-07-25 14:14:12.376789] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:42.968 [2024-07-25 14:14:12.376813] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:42.968 [2024-07-25 14:14:12.376871] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:42.968 [2024-07-25 14:14:12.378824] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:42.968 are Threshold: 0% 00:12:42.968 Life Percentage Used: 0% 00:12:42.968 Data Units Read: 0 00:12:42.968 Data Units Written: 0 00:12:42.968 Host Read Commands: 0 00:12:42.968 Host Write Commands: 0 00:12:42.968 Controller Busy Time: 0 minutes 00:12:42.968 Power Cycles: 0 00:12:42.968 Power On Hours: 0 hours 00:12:42.968 Unsafe Shutdowns: 0 00:12:42.968 Unrecoverable Media Errors: 0 00:12:42.968 Lifetime Error Log Entries: 0 00:12:42.968 Warning Temperature Time: 0 minutes 00:12:42.968 Critical Temperature Time: 0 minutes 00:12:42.968 00:12:42.968 Number of Queues 00:12:42.968 ================ 00:12:42.968 Number of I/O Submission Queues: 127 00:12:42.968 Number of I/O Completion Queues: 127 00:12:42.968 00:12:42.968 Active Namespaces 00:12:42.968 ================= 00:12:42.968 Namespace ID:1 00:12:42.968 Error Recovery Timeout: Unlimited 00:12:42.969 Command Set Identifier: NVM (00h) 00:12:42.969 Deallocate: Supported 00:12:42.969 Deallocated/Unwritten Error: Not Supported 00:12:42.969 Deallocated Read Value: Unknown 00:12:42.969 Deallocate in Write Zeroes: Not Supported 00:12:42.969 Deallocated Guard Field: 0xFFFF 00:12:42.969 Flush: Supported 00:12:42.969 Reservation: Supported 00:12:42.969 Namespace Sharing Capabilities: Multiple Controllers 00:12:42.969 Size (in LBAs): 131072 (0GiB) 00:12:42.969 Capacity (in LBAs): 131072 (0GiB) 00:12:42.969 Utilization (in LBAs): 131072 (0GiB) 00:12:42.969 NGUID: E78EE0A5BE304F9DAF21DFB61EC04463 00:12:42.969 UUID: e78ee0a5-be30-4f9d-af21-dfb61ec04463 00:12:42.969 Thin Provisioning: Not Supported 00:12:42.969 Per-NS Atomic Units: Yes 00:12:42.969 Atomic Boundary Size (Normal): 0 00:12:42.969 Atomic Boundary Size (PFail): 0 00:12:42.969 Atomic Boundary Offset: 0 00:12:42.969 Maximum Single Source Range Length: 65535 00:12:42.969 Maximum Copy Length: 65535 00:12:42.969 Maximum Source Range Count: 1 00:12:42.969 NGUID/EUI64 Never Reused: No 00:12:42.969 Namespace Write Protected: No 00:12:42.969 Number of LBA Formats: 1 00:12:42.969 Current LBA Format: LBA Format #00 00:12:42.969 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:42.969 00:12:42.969 14:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:42.969 EAL: No free 2048 kB hugepages reported 
on node 1 00:12:42.969 [2024-07-25 14:14:12.607874] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.245 Initializing NVMe Controllers 00:12:48.245 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.245 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:48.245 Initialization complete. Launching workers. 00:12:48.245 ======================================================== 00:12:48.245 Latency(us) 00:12:48.245 Device Information : IOPS MiB/s Average min max 00:12:48.245 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33185.31 129.63 3856.52 1199.13 7458.13 00:12:48.245 ======================================================== 00:12:48.245 Total : 33185.31 129.63 3856.52 1199.13 7458.13 00:12:48.245 00:12:48.245 [2024-07-25 14:14:17.629444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.245 14:14:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:48.245 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.245 [2024-07-25 14:14:17.873571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.524 Initializing NVMe Controllers 00:12:53.524 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:53.524 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:53.524 Initialization complete. Launching workers. 
00:12:53.524 ======================================================== 00:12:53.524 Latency(us) 00:12:53.524 Device Information : IOPS MiB/s Average min max 00:12:53.524 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15986.80 62.45 8013.42 5997.74 15822.84 00:12:53.524 ======================================================== 00:12:53.524 Total : 15986.80 62.45 8013.42 5997.74 15822.84 00:12:53.524 00:12:53.524 [2024-07-25 14:14:22.912259] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.524 14:14:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:53.524 EAL: No free 2048 kB hugepages reported on node 1 00:12:53.524 [2024-07-25 14:14:23.123325] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:58.794 [2024-07-25 14:14:28.187373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:58.794 Initializing NVMe Controllers 00:12:58.794 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:58.794 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:58.794 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:58.794 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:58.794 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:58.794 Initialization complete. Launching workers. 00:12:58.794 Starting thread on core 2 00:12:58.794 Starting thread on core 3 00:12:58.794 Starting thread on core 1 00:12:58.794 14:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:58.794 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.052 [2024-07-25 14:14:28.492611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.338 [2024-07-25 14:14:31.632497] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.338 Initializing NVMe Controllers 00:13:02.338 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.338 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.338 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:02.338 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:02.338 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:02.338 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:02.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:02.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:02.338 Initialization complete. Launching workers. 
00:13:02.338 Starting thread on core 1 with urgent priority queue 00:13:02.338 Starting thread on core 2 with urgent priority queue 00:13:02.338 Starting thread on core 3 with urgent priority queue 00:13:02.338 Starting thread on core 0 with urgent priority queue 00:13:02.338 SPDK bdev Controller (SPDK1 ) core 0: 2394.33 IO/s 41.77 secs/100000 ios 00:13:02.338 SPDK bdev Controller (SPDK1 ) core 1: 2486.67 IO/s 40.21 secs/100000 ios 00:13:02.338 SPDK bdev Controller (SPDK1 ) core 2: 2488.67 IO/s 40.18 secs/100000 ios 00:13:02.338 SPDK bdev Controller (SPDK1 ) core 3: 2452.67 IO/s 40.77 secs/100000 ios 00:13:02.338 ======================================================== 00:13:02.338 00:13:02.338 14:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:02.338 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.338 [2024-07-25 14:14:31.923715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:02.338 Initializing NVMe Controllers 00:13:02.338 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.338 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:02.338 Namespace ID: 1 size: 0GB 00:13:02.338 Initialization complete. 00:13:02.338 INFO: using host memory buffer for IO 00:13:02.338 Hello world! 00:13:02.338 [2024-07-25 14:14:31.957315] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:02.599 14:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:02.599 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.858 [2024-07-25 14:14:32.254536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:03.794 Initializing NVMe Controllers 00:13:03.794 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.794 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:03.794 Initialization complete. Launching workers. 
00:13:03.794 submit (in ns) avg, min, max = 7074.6, 3523.3, 4017865.6 00:13:03.794 complete (in ns) avg, min, max = 27467.6, 2063.3, 6993716.7 00:13:03.794 00:13:03.794 Submit histogram 00:13:03.794 ================ 00:13:03.794 Range in us Cumulative Count 00:13:03.794 3.508 - 3.532: 0.0077% ( 1) 00:13:03.794 3.532 - 3.556: 0.0232% ( 2) 00:13:03.794 3.556 - 3.579: 0.8188% ( 103) 00:13:03.794 3.579 - 3.603: 2.0085% ( 154) 00:13:03.794 3.603 - 3.627: 6.4117% ( 570) 00:13:03.794 3.627 - 3.650: 12.8544% ( 834) 00:13:03.794 3.650 - 3.674: 21.1433% ( 1073) 00:13:03.794 3.674 - 3.698: 29.8494% ( 1127) 00:13:03.794 3.698 - 3.721: 38.3623% ( 1102) 00:13:03.794 3.721 - 3.745: 44.9517% ( 853) 00:13:03.794 3.745 - 3.769: 50.7377% ( 749) 00:13:03.794 3.769 - 3.793: 55.7744% ( 652) 00:13:03.794 3.793 - 3.816: 59.4206% ( 472) 00:13:03.794 3.816 - 3.840: 63.1209% ( 479) 00:13:03.794 3.840 - 3.864: 66.8134% ( 478) 00:13:03.794 3.864 - 3.887: 70.5678% ( 486) 00:13:03.794 3.887 - 3.911: 74.6466% ( 528) 00:13:03.794 3.911 - 3.935: 78.6636% ( 520) 00:13:03.794 3.935 - 3.959: 82.1398% ( 450) 00:13:03.794 3.959 - 3.982: 84.9285% ( 361) 00:13:03.794 3.982 - 4.006: 87.2692% ( 303) 00:13:03.794 4.006 - 4.030: 88.9301% ( 215) 00:13:03.794 4.030 - 4.053: 90.1738% ( 161) 00:13:03.794 4.053 - 4.077: 91.3403% ( 151) 00:13:03.794 4.077 - 4.101: 92.2596% ( 119) 00:13:03.794 4.101 - 4.124: 93.2406% ( 127) 00:13:03.794 4.124 - 4.148: 94.2217% ( 127) 00:13:03.794 4.148 - 4.172: 94.9092% ( 89) 00:13:03.794 4.172 - 4.196: 95.5427% ( 82) 00:13:03.794 4.196 - 4.219: 95.9753% ( 56) 00:13:03.794 4.219 - 4.243: 96.2534% ( 36) 00:13:03.794 4.243 - 4.267: 96.4542% ( 26) 00:13:03.794 4.267 - 4.290: 96.6628% ( 27) 00:13:03.794 4.290 - 4.314: 96.7710% ( 14) 00:13:03.794 4.314 - 4.338: 96.9409% ( 22) 00:13:03.794 4.338 - 4.361: 97.0568% ( 15) 00:13:03.794 4.361 - 4.385: 97.1649% ( 14) 00:13:03.794 4.385 - 4.409: 97.2345% ( 9) 00:13:03.794 4.409 - 4.433: 97.3040% ( 9) 00:13:03.794 4.433 - 4.456: 97.3658% ( 8) 00:13:03.794 4.456 - 4.480: 97.3890% ( 3) 00:13:03.794 4.480 - 4.504: 97.4353% ( 6) 00:13:03.794 4.504 - 4.527: 97.4662% ( 4) 00:13:03.794 4.527 - 4.551: 97.4894% ( 3) 00:13:03.794 4.551 - 4.575: 97.5048% ( 2) 00:13:03.794 4.575 - 4.599: 97.5126% ( 1) 00:13:03.794 4.599 - 4.622: 97.5280% ( 2) 00:13:03.794 4.622 - 4.646: 97.5357% ( 1) 00:13:03.794 4.646 - 4.670: 97.5589% ( 3) 00:13:03.794 4.670 - 4.693: 97.5744% ( 2) 00:13:03.794 4.693 - 4.717: 97.6053% ( 4) 00:13:03.794 4.717 - 4.741: 97.6130% ( 1) 00:13:03.794 4.741 - 4.764: 97.6593% ( 6) 00:13:03.794 4.764 - 4.788: 97.6980% ( 5) 00:13:03.794 4.788 - 4.812: 97.7289% ( 4) 00:13:03.794 4.812 - 4.836: 97.7984% ( 9) 00:13:03.794 4.836 - 4.859: 97.8447% ( 6) 00:13:03.794 4.859 - 4.883: 97.8679% ( 3) 00:13:03.794 4.883 - 4.907: 97.8756% ( 1) 00:13:03.794 4.907 - 4.930: 97.9220% ( 6) 00:13:03.794 4.930 - 4.954: 97.9606% ( 5) 00:13:03.794 4.954 - 4.978: 97.9992% ( 5) 00:13:03.794 4.978 - 5.001: 98.0456% ( 6) 00:13:03.794 5.001 - 5.025: 98.0765% ( 4) 00:13:03.794 5.025 - 5.049: 98.1383% ( 8) 00:13:03.794 5.049 - 5.073: 98.1924% ( 7) 00:13:03.794 5.073 - 5.096: 98.2001% ( 1) 00:13:03.794 5.096 - 5.120: 98.2464% ( 6) 00:13:03.794 5.120 - 5.144: 98.2773% ( 4) 00:13:03.794 5.144 - 5.167: 98.2928% ( 2) 00:13:03.794 5.167 - 5.191: 98.3160% ( 3) 00:13:03.794 5.191 - 5.215: 98.3314% ( 2) 00:13:03.794 5.215 - 5.239: 98.3391% ( 1) 00:13:03.794 5.239 - 5.262: 98.3546% ( 2) 00:13:03.794 5.262 - 5.286: 98.3700% ( 2) 00:13:03.794 5.333 - 5.357: 98.3778% ( 1) 00:13:03.794 5.476 - 5.499: 98.3932% ( 2) 
00:13:03.794 5.594 - 5.618: 98.4009% ( 1) 00:13:03.794 5.879 - 5.902: 98.4087% ( 1) 00:13:03.794 5.926 - 5.950: 98.4164% ( 1) 00:13:03.794 5.973 - 5.997: 98.4241% ( 1) 00:13:03.794 6.068 - 6.116: 98.4318% ( 1) 00:13:03.794 6.116 - 6.163: 98.4396% ( 1) 00:13:03.794 6.637 - 6.684: 98.4473% ( 1) 00:13:03.794 6.874 - 6.921: 98.4550% ( 1) 00:13:03.794 7.111 - 7.159: 98.4705% ( 2) 00:13:03.794 7.159 - 7.206: 98.4782% ( 1) 00:13:03.794 7.206 - 7.253: 98.4859% ( 1) 00:13:03.794 7.253 - 7.301: 98.4936% ( 1) 00:13:03.794 7.301 - 7.348: 98.5168% ( 3) 00:13:03.794 7.348 - 7.396: 98.5245% ( 1) 00:13:03.794 7.396 - 7.443: 98.5323% ( 1) 00:13:03.794 7.443 - 7.490: 98.5400% ( 1) 00:13:03.794 7.490 - 7.538: 98.5554% ( 2) 00:13:03.794 7.680 - 7.727: 98.5709% ( 2) 00:13:03.794 7.775 - 7.822: 98.5786% ( 1) 00:13:03.794 7.870 - 7.917: 98.5863% ( 1) 00:13:03.794 7.964 - 8.012: 98.5941% ( 1) 00:13:03.794 8.012 - 8.059: 98.6018% ( 1) 00:13:03.794 8.059 - 8.107: 98.6250% ( 3) 00:13:03.794 8.107 - 8.154: 98.6404% ( 2) 00:13:03.794 8.154 - 8.201: 98.6481% ( 1) 00:13:03.794 8.249 - 8.296: 98.6636% ( 2) 00:13:03.794 8.391 - 8.439: 98.6713% ( 1) 00:13:03.794 8.439 - 8.486: 98.6945% ( 3) 00:13:03.795 8.533 - 8.581: 98.7099% ( 2) 00:13:03.795 8.581 - 8.628: 98.7177% ( 1) 00:13:03.795 8.865 - 8.913: 98.7254% ( 1) 00:13:03.795 9.007 - 9.055: 98.7331% ( 1) 00:13:03.795 9.055 - 9.102: 98.7408% ( 1) 00:13:03.795 9.197 - 9.244: 98.7486% ( 1) 00:13:03.795 9.292 - 9.339: 98.7563% ( 1) 00:13:03.795 9.434 - 9.481: 98.7640% ( 1) 00:13:03.795 9.529 - 9.576: 98.7717% ( 1) 00:13:03.795 9.576 - 9.624: 98.7795% ( 1) 00:13:03.795 9.671 - 9.719: 98.7872% ( 1) 00:13:03.795 9.719 - 9.766: 98.7949% ( 1) 00:13:03.795 9.908 - 9.956: 98.8026% ( 1) 00:13:03.795 10.050 - 10.098: 98.8104% ( 1) 00:13:03.795 10.382 - 10.430: 98.8181% ( 1) 00:13:03.795 10.477 - 10.524: 98.8258% ( 1) 00:13:03.795 10.667 - 10.714: 98.8335% ( 1) 00:13:03.795 10.809 - 10.856: 98.8413% ( 1) 00:13:03.795 10.999 - 11.046: 98.8490% ( 1) 00:13:03.795 11.662 - 11.710: 98.8567% ( 1) 00:13:03.795 11.947 - 11.994: 98.8644% ( 1) 00:13:03.795 11.994 - 12.041: 98.8722% ( 1) 00:13:03.795 12.516 - 12.610: 98.8799% ( 1) 00:13:03.795 12.610 - 12.705: 98.8876% ( 1) 00:13:03.795 12.800 - 12.895: 98.8953% ( 1) 00:13:03.795 12.895 - 12.990: 98.9031% ( 1) 00:13:03.795 12.990 - 13.084: 98.9108% ( 1) 00:13:03.795 13.084 - 13.179: 98.9185% ( 1) 00:13:03.795 13.179 - 13.274: 98.9262% ( 1) 00:13:03.795 13.748 - 13.843: 98.9340% ( 1) 00:13:03.795 14.507 - 14.601: 98.9417% ( 1) 00:13:03.795 14.601 - 14.696: 98.9494% ( 1) 00:13:03.795 14.696 - 14.791: 98.9571% ( 1) 00:13:03.795 17.161 - 17.256: 98.9649% ( 1) 00:13:03.795 17.256 - 17.351: 98.9726% ( 1) 00:13:03.795 17.351 - 17.446: 98.9880% ( 2) 00:13:03.795 17.446 - 17.541: 99.0035% ( 2) 00:13:03.795 17.541 - 17.636: 99.0267% ( 3) 00:13:03.795 17.636 - 17.730: 99.0730% ( 6) 00:13:03.795 17.730 - 17.825: 99.1194% ( 6) 00:13:03.795 17.825 - 17.920: 99.2121% ( 12) 00:13:03.795 17.920 - 18.015: 99.2507% ( 5) 00:13:03.795 18.015 - 18.110: 99.2893% ( 5) 00:13:03.795 18.110 - 18.204: 99.3357% ( 6) 00:13:03.795 18.204 - 18.299: 99.4284% ( 12) 00:13:03.795 18.299 - 18.394: 99.4515% ( 3) 00:13:03.795 18.394 - 18.489: 99.5365% ( 11) 00:13:03.795 18.489 - 18.584: 99.6060% ( 9) 00:13:03.795 18.584 - 18.679: 99.6524% ( 6) 00:13:03.795 18.679 - 18.773: 99.7219% ( 9) 00:13:03.795 18.773 - 18.868: 99.7451% ( 3) 00:13:03.795 18.868 - 18.963: 99.7683% ( 3) 00:13:03.795 18.963 - 19.058: 99.7992% ( 4) 00:13:03.795 19.058 - 19.153: 99.8069% ( 1) 00:13:03.795 19.153 - 
19.247: 99.8223% ( 2) 00:13:03.795 19.247 - 19.342: 99.8378% ( 2) 00:13:03.795 19.342 - 19.437: 99.8532% ( 2) 00:13:03.795 19.532 - 19.627: 99.8610% ( 1) 00:13:03.795 19.627 - 19.721: 99.8687% ( 1) 00:13:03.795 19.816 - 19.911: 99.8764% ( 1) 00:13:03.795 19.911 - 20.006: 99.8841% ( 1) 00:13:03.795 20.006 - 20.101: 99.8919% ( 1) 00:13:03.795 22.471 - 22.566: 99.8996% ( 1) 00:13:03.795 22.661 - 22.756: 99.9073% ( 1) 00:13:03.795 25.790 - 25.979: 99.9150% ( 1) 00:13:03.795 27.307 - 27.496: 99.9228% ( 1) 00:13:03.795 3980.705 - 4004.978: 99.9923% ( 9) 00:13:03.795 4004.978 - 4029.250: 100.0000% ( 1) 00:13:03.795 00:13:03.795 Complete histogram 00:13:03.795 ================== 00:13:03.795 Range in us Cumulative Count 00:13:03.795 2.062 - 2.074: 5.6701% ( 734) 00:13:03.795 2.074 - 2.086: 43.0668% ( 4841) 00:13:03.795 2.086 - 2.098: 48.5670% ( 712) 00:13:03.795 2.098 - 2.110: 53.2175% ( 602) 00:13:03.795 2.110 - 2.121: 60.7184% ( 971) 00:13:03.795 2.121 - 2.133: 62.2789% ( 202) 00:13:03.795 2.133 - 2.145: 68.4511% ( 799) 00:13:03.795 2.145 - 2.157: 76.2843% ( 1014) 00:13:03.795 2.157 - 2.169: 77.1418% ( 111) 00:13:03.795 2.169 - 2.181: 79.9459% ( 363) 00:13:03.795 2.181 - 2.193: 82.4565% ( 325) 00:13:03.795 2.193 - 2.204: 82.9741% ( 67) 00:13:03.795 2.204 - 2.216: 84.9672% ( 258) 00:13:03.795 2.216 - 2.228: 89.2391% ( 553) 00:13:03.795 2.228 - 2.240: 91.1317% ( 245) 00:13:03.795 2.240 - 2.252: 92.4681% ( 173) 00:13:03.795 2.252 - 2.264: 93.7505% ( 166) 00:13:03.795 2.264 - 2.276: 94.1058% ( 46) 00:13:03.795 2.276 - 2.287: 94.3144% ( 27) 00:13:03.795 2.287 - 2.299: 94.8474% ( 69) 00:13:03.795 2.299 - 2.311: 95.3805% ( 69) 00:13:03.795 2.311 - 2.323: 95.6045% ( 29) 00:13:03.795 2.323 - 2.335: 95.6431% ( 5) 00:13:03.795 2.335 - 2.347: 95.6895% ( 6) 00:13:03.795 2.347 - 2.359: 95.9212% ( 30) 00:13:03.795 2.359 - 2.370: 96.2457% ( 42) 00:13:03.795 2.370 - 2.382: 96.6474% ( 52) 00:13:03.795 2.382 - 2.394: 97.0800% ( 56) 00:13:03.795 2.394 - 2.406: 97.4121% ( 43) 00:13:03.795 2.406 - 2.418: 97.5744% ( 21) 00:13:03.795 2.418 - 2.430: 97.7289% ( 20) 00:13:03.795 2.430 - 2.441: 97.8447% ( 15) 00:13:03.795 2.441 - 2.453: 97.9683% ( 16) 00:13:03.795 2.453 - 2.465: 98.0842% ( 15) 00:13:03.795 2.465 - 2.477: 98.1537% ( 9) 00:13:03.795 2.477 - 2.489: 98.2619% ( 14) 00:13:03.795 2.489 - 2.501: 98.3391% ( 10) 00:13:03.795 2.501 - 2.513: 98.3546% ( 2) 00:13:03.795 2.513 - 2.524: 98.3700% ( 2) 00:13:03.795 2.524 - 2.536: 98.3778% ( 1) 00:13:03.795 2.548 - 2.560: 98.3855% ( 1) 00:13:03.795 2.560 - 2.572: 98.3932% ( 1) 00:13:03.795 2.572 - 2.584: 98.4009% ( 1) 00:13:03.795 2.596 - 2.607: 98.4087% ( 1) 00:13:03.795 2.607 - 2.619: 98.4241% ( 2) 00:13:03.795 2.643 - 2.655: 98.4318% ( 1) 00:13:03.795 2.655 - 2.667: 98.4396% ( 1) 00:13:03.795 2.667 - 2.679: 98.4473% ( 1) 00:13:03.795 2.690 - 2.702: 98.4550% ( 1) 00:13:03.795 2.738 - 2.750: 98.4627% ( 1) 00:13:03.795 2.773 - 2.785: 98.4705% ( 1) 00:13:03.795 2.868 - 2.880: 98.4782% ( 1) 00:13:03.795 3.224 - 3.247: 98.5014% ( 3) 00:13:03.795 3.271 - 3.295: 98.5245% ( 3) 00:13:03.795 3.295 - 3.319: 98.5400% ( 2) 00:13:03.795 3.319 - 3.342: 98.5554% ( 2) 00:13:03.795 3.342 - 3.366: 98.5709% ( 2) 00:13:03.795 3.366 - 3.390: 98.5786% ( 1) 00:13:03.795 3.390 - 3.413: 98.5863% ( 1) 00:13:03.795 3.413 - 3.437: 98.6018% ( 2) 00:13:03.795 3.437 - 3.461: 98.6250% ( 3) 00:13:03.795 3.461 - 3.484: 98.6404% ( 2) 00:13:03.795 3.484 - 3.508: 98.6559% ( 2) 00:13:03.795 3.508 - 3.532: 98.6636% ( 1) 00:13:03.795 3.579 - 3.603: 98.6713% ( 1) 00:13:03.795 3.603 - 3.627: 98.6868% ( 2) 
00:13:03.795 3.698 - 3.721: 98.6945% ( 1) 00:13:03.795 3.745 - 3.769: 98.7022% ( 1) 00:13:03.795 3.840 - 3.864: 98.7099% ( 1) 00:13:03.795 3.864 - 3.887: 98.7177% ( 1) 00:13:03.795 3.887 - 3.911: 98.7254% ( 1) 00:13:03.795 5.262 - 5.286: 98.7331% ( 1) 00:13:03.795 5.404 - 5.428: 98.7408% ( 1) 00:13:03.795 5.547 - 5.570: 98.7486% ( 1) 00:13:03.795 5.713 - 5.736: 98.7563% ( 1) 00:13:03.795 6.068 - 6.116: 98.7640% ( 1) 00:13:03.795 6.163 - 6.210: 98.7717% ( 1) 00:13:03.795 6.210 - 6.258: 98.7795% ( 1) 00:13:03.795 6.353 - 6.400: 9[2024-07-25 14:14:33.276708] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:03.795 8.7872% ( 1) 00:13:03.795 6.447 - 6.495: 98.7949% ( 1) 00:13:03.795 6.732 - 6.779: 98.8026% ( 1) 00:13:03.795 6.827 - 6.874: 98.8104% ( 1) 00:13:03.795 10.287 - 10.335: 98.8181% ( 1) 00:13:03.795 11.615 - 11.662: 98.8258% ( 1) 00:13:03.795 11.852 - 11.899: 98.8335% ( 1) 00:13:03.795 15.644 - 15.739: 98.8490% ( 2) 00:13:03.795 15.739 - 15.834: 98.8722% ( 3) 00:13:03.795 15.834 - 15.929: 98.8799% ( 1) 00:13:03.795 15.929 - 16.024: 98.8876% ( 1) 00:13:03.796 16.024 - 16.119: 98.9262% ( 5) 00:13:03.796 16.213 - 16.308: 98.9571% ( 4) 00:13:03.796 16.308 - 16.403: 98.9649% ( 1) 00:13:03.796 16.403 - 16.498: 99.0112% ( 6) 00:13:03.796 16.498 - 16.593: 99.1039% ( 12) 00:13:03.796 16.593 - 16.687: 99.1503% ( 6) 00:13:03.796 16.687 - 16.782: 99.2043% ( 7) 00:13:03.796 16.782 - 16.877: 99.2430% ( 5) 00:13:03.796 16.877 - 16.972: 99.2739% ( 4) 00:13:03.796 16.972 - 17.067: 99.2970% ( 3) 00:13:03.796 17.067 - 17.161: 99.3202% ( 3) 00:13:03.796 17.161 - 17.256: 99.3357% ( 2) 00:13:03.796 17.446 - 17.541: 99.3434% ( 1) 00:13:03.796 17.541 - 17.636: 99.3588% ( 2) 00:13:03.796 17.920 - 18.015: 99.3666% ( 1) 00:13:03.796 100.883 - 101.641: 99.3743% ( 1) 00:13:03.796 3737.979 - 3762.252: 99.3820% ( 1) 00:13:03.796 3980.705 - 4004.978: 99.8996% ( 67) 00:13:03.796 4004.978 - 4029.250: 99.9846% ( 11) 00:13:03.796 4077.796 - 4102.068: 99.9923% ( 1) 00:13:03.796 6990.507 - 7039.052: 100.0000% ( 1) 00:13:03.796 00:13:03.796 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:03.796 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:03.796 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:03.796 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:03.796 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.053 [ 00:13:04.053 { 00:13:04.053 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.053 "subtype": "Discovery", 00:13:04.053 "listen_addresses": [], 00:13:04.053 "allow_any_host": true, 00:13:04.053 "hosts": [] 00:13:04.053 }, 00:13:04.053 { 00:13:04.053 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.053 "subtype": "NVMe", 00:13:04.053 "listen_addresses": [ 00:13:04.053 { 00:13:04.053 "trtype": "VFIOUSER", 00:13:04.053 "adrfam": "IPv4", 00:13:04.053 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.053 "trsvcid": "0" 00:13:04.053 } 00:13:04.053 ], 00:13:04.053 "allow_any_host": true, 00:13:04.053 "hosts": [], 00:13:04.053 "serial_number": "SPDK1", 
00:13:04.053 "model_number": "SPDK bdev Controller", 00:13:04.053 "max_namespaces": 32, 00:13:04.053 "min_cntlid": 1, 00:13:04.053 "max_cntlid": 65519, 00:13:04.053 "namespaces": [ 00:13:04.053 { 00:13:04.053 "nsid": 1, 00:13:04.053 "bdev_name": "Malloc1", 00:13:04.053 "name": "Malloc1", 00:13:04.053 "nguid": "E78EE0A5BE304F9DAF21DFB61EC04463", 00:13:04.053 "uuid": "e78ee0a5-be30-4f9d-af21-dfb61ec04463" 00:13:04.053 } 00:13:04.053 ] 00:13:04.053 }, 00:13:04.053 { 00:13:04.053 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.053 "subtype": "NVMe", 00:13:04.053 "listen_addresses": [ 00:13:04.053 { 00:13:04.053 "trtype": "VFIOUSER", 00:13:04.053 "adrfam": "IPv4", 00:13:04.053 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.053 "trsvcid": "0" 00:13:04.053 } 00:13:04.053 ], 00:13:04.053 "allow_any_host": true, 00:13:04.053 "hosts": [], 00:13:04.053 "serial_number": "SPDK2", 00:13:04.053 "model_number": "SPDK bdev Controller", 00:13:04.053 "max_namespaces": 32, 00:13:04.053 "min_cntlid": 1, 00:13:04.053 "max_cntlid": 65519, 00:13:04.053 "namespaces": [ 00:13:04.053 { 00:13:04.053 "nsid": 1, 00:13:04.053 "bdev_name": "Malloc2", 00:13:04.053 "name": "Malloc2", 00:13:04.053 "nguid": "46BD884134F2480D81AC24E391775428", 00:13:04.053 "uuid": "46bd8841-34f2-480d-81ac-24e391775428" 00:13:04.053 } 00:13:04.053 ] 00:13:04.053 } 00:13:04.053 ] 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=897027 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:04.053 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:04.053 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.310 [2024-07-25 14:14:33.730614] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:04.310 Malloc3 00:13:04.310 14:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:04.568 [2024-07-25 14:14:34.108267] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:04.568 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:04.568 Asynchronous Event Request test 00:13:04.568 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:04.568 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:04.568 Registering asynchronous event callbacks... 00:13:04.568 Starting namespace attribute notice tests for all controllers... 00:13:04.568 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:04.568 aer_cb - Changed Namespace 00:13:04.568 Cleaning up... 00:13:04.826 [ 00:13:04.826 { 00:13:04.826 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:04.826 "subtype": "Discovery", 00:13:04.826 "listen_addresses": [], 00:13:04.826 "allow_any_host": true, 00:13:04.826 "hosts": [] 00:13:04.826 }, 00:13:04.826 { 00:13:04.826 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:04.826 "subtype": "NVMe", 00:13:04.826 "listen_addresses": [ 00:13:04.826 { 00:13:04.826 "trtype": "VFIOUSER", 00:13:04.826 "adrfam": "IPv4", 00:13:04.827 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:04.827 "trsvcid": "0" 00:13:04.827 } 00:13:04.827 ], 00:13:04.827 "allow_any_host": true, 00:13:04.827 "hosts": [], 00:13:04.827 "serial_number": "SPDK1", 00:13:04.827 "model_number": "SPDK bdev Controller", 00:13:04.827 "max_namespaces": 32, 00:13:04.827 "min_cntlid": 1, 00:13:04.827 "max_cntlid": 65519, 00:13:04.827 "namespaces": [ 00:13:04.827 { 00:13:04.827 "nsid": 1, 00:13:04.827 "bdev_name": "Malloc1", 00:13:04.827 "name": "Malloc1", 00:13:04.827 "nguid": "E78EE0A5BE304F9DAF21DFB61EC04463", 00:13:04.827 "uuid": "e78ee0a5-be30-4f9d-af21-dfb61ec04463" 00:13:04.827 }, 00:13:04.827 { 00:13:04.827 "nsid": 2, 00:13:04.827 "bdev_name": "Malloc3", 00:13:04.827 "name": "Malloc3", 00:13:04.827 "nguid": "2B8AE27FB3FD4FDF89F5F3064E5B971A", 00:13:04.827 "uuid": "2b8ae27f-b3fd-4fdf-89f5-f3064e5b971a" 00:13:04.827 } 00:13:04.827 ] 00:13:04.827 }, 00:13:04.827 { 00:13:04.827 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:04.827 "subtype": "NVMe", 00:13:04.827 "listen_addresses": [ 00:13:04.827 { 00:13:04.827 "trtype": "VFIOUSER", 00:13:04.827 "adrfam": "IPv4", 00:13:04.827 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:04.827 "trsvcid": "0" 00:13:04.827 } 00:13:04.827 ], 00:13:04.827 "allow_any_host": true, 00:13:04.827 "hosts": [], 00:13:04.827 
"serial_number": "SPDK2", 00:13:04.827 "model_number": "SPDK bdev Controller", 00:13:04.827 "max_namespaces": 32, 00:13:04.827 "min_cntlid": 1, 00:13:04.827 "max_cntlid": 65519, 00:13:04.827 "namespaces": [ 00:13:04.827 { 00:13:04.827 "nsid": 1, 00:13:04.827 "bdev_name": "Malloc2", 00:13:04.827 "name": "Malloc2", 00:13:04.827 "nguid": "46BD884134F2480D81AC24E391775428", 00:13:04.827 "uuid": "46bd8841-34f2-480d-81ac-24e391775428" 00:13:04.827 } 00:13:04.827 ] 00:13:04.827 } 00:13:04.827 ] 00:13:04.827 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 897027 00:13:04.827 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:04.827 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:04.827 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:04.827 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:04.827 [2024-07-25 14:14:34.406018] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:13:04.827 [2024-07-25 14:14:34.406079] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid897153 ] 00:13:04.827 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.827 [2024-07-25 14:14:34.441228] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:04.827 [2024-07-25 14:14:34.443549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:04.827 [2024-07-25 14:14:34.443578] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f0230a2d000 00:13:04.827 [2024-07-25 14:14:34.444550] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.827 [2024-07-25 14:14:34.445557] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.827 [2024-07-25 14:14:34.446559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.827 [2024-07-25 14:14:34.447559] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.827 [2024-07-25 14:14:34.448566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.827 [2024-07-25 14:14:34.449572] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.827 [2024-07-25 14:14:34.450585] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:04.827 [2024-07-25 14:14:34.451591] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:04.827 [2024-07-25 14:14:34.452597] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:04.827 [2024-07-25 14:14:34.452618] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f0230a22000 00:13:04.827 [2024-07-25 14:14:34.453766] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:04.827 [2024-07-25 14:14:34.467749] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:04.827 [2024-07-25 14:14:34.467784] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:04.827 [2024-07-25 14:14:34.472903] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:04.827 [2024-07-25 14:14:34.472961] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:04.827 [2024-07-25 14:14:34.473051] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:04.827 [2024-07-25 14:14:34.473095] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:04.827 [2024-07-25 14:14:34.473108] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:04.827 [2024-07-25 14:14:34.473905] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:04.827 [2024-07-25 14:14:34.473931] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:04.827 [2024-07-25 14:14:34.473945] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:04.827 [2024-07-25 14:14:34.474910] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:04.827 [2024-07-25 14:14:34.474930] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:04.827 [2024-07-25 14:14:34.474945] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:04.827 [2024-07-25 14:14:34.475918] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:04.827 [2024-07-25 14:14:34.475939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:04.827 [2024-07-25 14:14:34.476934] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:04.827 [2024-07-25 14:14:34.476956] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:04.827 [2024-07-25 14:14:34.476966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:04.827 [2024-07-25 14:14:34.476992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:04.827 [2024-07-25 14:14:34.477103] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:04.827 [2024-07-25 14:14:34.477114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:04.827 [2024-07-25 14:14:34.477122] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:04.827 [2024-07-25 14:14:34.477925] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:05.088 [2024-07-25 14:14:34.478935] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:05.088 [2024-07-25 14:14:34.479941] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:05.088 [2024-07-25 14:14:34.480938] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:05.088 [2024-07-25 14:14:34.481020] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:05.088 [2024-07-25 14:14:34.481960] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:05.088 [2024-07-25 14:14:34.481981] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:05.088 [2024-07-25 14:14:34.481990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:05.088 [2024-07-25 14:14:34.482013] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:05.088 [2024-07-25 14:14:34.482027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:05.088 [2024-07-25 14:14:34.482073] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:05.088 [2024-07-25 14:14:34.482085] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:05.088 [2024-07-25 14:14:34.482092] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:05.088 [2024-07-25 14:14:34.482127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.490072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.490096] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:05.089 [2024-07-25 14:14:34.490105] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:05.089 [2024-07-25 14:14:34.490113] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:05.089 [2024-07-25 14:14:34.490121] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:05.089 [2024-07-25 14:14:34.490130] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:05.089 [2024-07-25 14:14:34.490138] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:05.089 [2024-07-25 14:14:34.490146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.490161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.490182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.498070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.498098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.089 [2024-07-25 14:14:34.498113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.089 [2024-07-25 14:14:34.498125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.089 [2024-07-25 14:14:34.498137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:05.089 [2024-07-25 14:14:34.498146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.498163] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.498178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.506068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.506086] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:05.089 [2024-07-25 14:14:34.506096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.506116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.506128] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.506142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.514085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.514160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.514177] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.514191] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:05.089 [2024-07-25 14:14:34.514199] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:05.089 [2024-07-25 14:14:34.514205] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:05.089 [2024-07-25 14:14:34.514215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.522071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.522094] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:05.089 [2024-07-25 14:14:34.522114] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.522130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.522143] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:05.089 [2024-07-25 14:14:34.522151] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:05.089 [2024-07-25 14:14:34.522157] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:05.089 [2024-07-25 14:14:34.522166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.530074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.530101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.530117] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.530131] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:05.089 [2024-07-25 14:14:34.530139] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:05.089 [2024-07-25 14:14:34.530145] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:05.089 [2024-07-25 14:14:34.530154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.538086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.538107] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.538124] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.538141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.538155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.538165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.538174] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.538183] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:05.089 [2024-07-25 14:14:34.538190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:05.089 [2024-07-25 14:14:34.538199] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:05.089 [2024-07-25 14:14:34.538225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.546071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.546097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.554069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.554093] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.562071] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.562096] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:05.089 [2024-07-25 14:14:34.570071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:05.089 [2024-07-25 14:14:34.570101] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:05.089 [2024-07-25 14:14:34.570112] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:05.089 [2024-07-25 14:14:34.570119] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:05.089 [2024-07-25 14:14:34.570125] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:05.089 [2024-07-25 14:14:34.570131] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:05.089 [2024-07-25 14:14:34.570140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:05.089 [2024-07-25 14:14:34.570152] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:05.089 [2024-07-25 14:14:34.570160] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:05.089 [2024-07-25 14:14:34.570166] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:05.090 [2024-07-25 14:14:34.570175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:05.090 [2024-07-25 14:14:34.570190] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:05.090 [2024-07-25 14:14:34.570199] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:05.090 [2024-07-25 14:14:34.570204] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:05.090 [2024-07-25 14:14:34.570213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:05.090 [2024-07-25 14:14:34.570225] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:05.090 [2024-07-25 14:14:34.570233] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:05.090 [2024-07-25 14:14:34.570239] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:05.090 [2024-07-25 14:14:34.570248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:05.090 [2024-07-25 14:14:34.578072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:05.090 [2024-07-25 14:14:34.578099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:05.090 [2024-07-25 14:14:34.578116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:05.090 [2024-07-25 14:14:34.578128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:05.090 ===================================================== 00:13:05.090 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:05.090 ===================================================== 00:13:05.090 Controller Capabilities/Features 00:13:05.090 ================================ 00:13:05.090 Vendor ID: 4e58 00:13:05.090 Subsystem Vendor ID: 4e58 00:13:05.090 Serial Number: SPDK2 00:13:05.090 Model Number: SPDK bdev Controller 00:13:05.090 Firmware Version: 24.09 00:13:05.090 Recommended Arb Burst: 6 00:13:05.090 IEEE OUI Identifier: 8d 6b 50 00:13:05.090 Multi-path I/O 00:13:05.090 May have multiple subsystem ports: Yes 00:13:05.090 May have multiple controllers: Yes 00:13:05.090 Associated with SR-IOV VF: No 00:13:05.090 Max Data Transfer Size: 131072 00:13:05.090 Max Number of Namespaces: 32 00:13:05.090 Max Number of I/O Queues: 127 00:13:05.090 NVMe Specification Version (VS): 1.3 00:13:05.090 NVMe Specification Version (Identify): 1.3 00:13:05.090 Maximum Queue Entries: 256 00:13:05.090 Contiguous Queues Required: Yes 00:13:05.090 Arbitration Mechanisms Supported 00:13:05.090 Weighted Round Robin: Not Supported 00:13:05.090 Vendor Specific: Not Supported 00:13:05.090 Reset Timeout: 15000 ms 00:13:05.090 Doorbell Stride: 4 bytes 00:13:05.090 NVM Subsystem Reset: Not Supported 00:13:05.090 Command Sets Supported 00:13:05.090 NVM Command Set: Supported 00:13:05.090 Boot Partition: Not Supported 00:13:05.090 Memory Page Size Minimum: 4096 bytes 00:13:05.090 Memory Page Size Maximum: 4096 bytes 00:13:05.090 Persistent Memory Region: Not Supported 00:13:05.090 Optional Asynchronous Events Supported 00:13:05.090 Namespace Attribute Notices: Supported 00:13:05.090 Firmware Activation Notices: Not Supported 00:13:05.090 ANA Change Notices: Not Supported 00:13:05.090 PLE Aggregate Log Change Notices: Not Supported 00:13:05.090 LBA Status Info Alert Notices: Not Supported 00:13:05.090 EGE Aggregate Log Change Notices: Not Supported 00:13:05.090 Normal NVM Subsystem Shutdown event: Not Supported 00:13:05.090 Zone Descriptor Change Notices: Not Supported 00:13:05.090 Discovery Log Change Notices: Not Supported 00:13:05.090 Controller Attributes 00:13:05.090 128-bit Host Identifier: Supported 00:13:05.090 Non-Operational Permissive Mode: Not Supported 00:13:05.090 NVM Sets: Not Supported 00:13:05.090 Read Recovery Levels: Not Supported 00:13:05.090 Endurance Groups: Not Supported 00:13:05.090 Predictable Latency Mode: Not Supported 00:13:05.090 Traffic Based Keep ALive: Not Supported 00:13:05.090 Namespace Granularity: Not Supported 00:13:05.090 SQ Associations: Not Supported 00:13:05.090 UUID List: Not Supported 00:13:05.090 Multi-Domain Subsystem: Not Supported 00:13:05.090 Fixed Capacity Management: Not Supported 00:13:05.090 Variable Capacity Management: Not Supported 00:13:05.090 Delete Endurance Group: Not Supported 00:13:05.090 Delete NVM Set: Not Supported 00:13:05.090 Extended LBA Formats Supported: Not Supported 00:13:05.090 Flexible Data Placement Supported: Not Supported 00:13:05.090 00:13:05.090 Controller Memory Buffer Support 00:13:05.090 ================================ 00:13:05.090 Supported: No 00:13:05.090 00:13:05.090 Persistent Memory Region Support 00:13:05.090 
================================ 00:13:05.090 Supported: No 00:13:05.090 00:13:05.090 Admin Command Set Attributes 00:13:05.090 ============================ 00:13:05.090 Security Send/Receive: Not Supported 00:13:05.090 Format NVM: Not Supported 00:13:05.090 Firmware Activate/Download: Not Supported 00:13:05.090 Namespace Management: Not Supported 00:13:05.090 Device Self-Test: Not Supported 00:13:05.090 Directives: Not Supported 00:13:05.090 NVMe-MI: Not Supported 00:13:05.090 Virtualization Management: Not Supported 00:13:05.090 Doorbell Buffer Config: Not Supported 00:13:05.090 Get LBA Status Capability: Not Supported 00:13:05.090 Command & Feature Lockdown Capability: Not Supported 00:13:05.090 Abort Command Limit: 4 00:13:05.090 Async Event Request Limit: 4 00:13:05.090 Number of Firmware Slots: N/A 00:13:05.090 Firmware Slot 1 Read-Only: N/A 00:13:05.090 Firmware Activation Without Reset: N/A 00:13:05.090 Multiple Update Detection Support: N/A 00:13:05.090 Firmware Update Granularity: No Information Provided 00:13:05.090 Per-Namespace SMART Log: No 00:13:05.090 Asymmetric Namespace Access Log Page: Not Supported 00:13:05.090 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:05.090 Command Effects Log Page: Supported 00:13:05.090 Get Log Page Extended Data: Supported 00:13:05.090 Telemetry Log Pages: Not Supported 00:13:05.090 Persistent Event Log Pages: Not Supported 00:13:05.090 Supported Log Pages Log Page: May Support 00:13:05.090 Commands Supported & Effects Log Page: Not Supported 00:13:05.090 Feature Identifiers & Effects Log Page:May Support 00:13:05.090 NVMe-MI Commands & Effects Log Page: May Support 00:13:05.090 Data Area 4 for Telemetry Log: Not Supported 00:13:05.090 Error Log Page Entries Supported: 128 00:13:05.090 Keep Alive: Supported 00:13:05.090 Keep Alive Granularity: 10000 ms 00:13:05.090 00:13:05.090 NVM Command Set Attributes 00:13:05.090 ========================== 00:13:05.090 Submission Queue Entry Size 00:13:05.090 Max: 64 00:13:05.090 Min: 64 00:13:05.090 Completion Queue Entry Size 00:13:05.091 Max: 16 00:13:05.091 Min: 16 00:13:05.091 Number of Namespaces: 32 00:13:05.091 Compare Command: Supported 00:13:05.091 Write Uncorrectable Command: Not Supported 00:13:05.091 Dataset Management Command: Supported 00:13:05.091 Write Zeroes Command: Supported 00:13:05.091 Set Features Save Field: Not Supported 00:13:05.091 Reservations: Not Supported 00:13:05.091 Timestamp: Not Supported 00:13:05.091 Copy: Supported 00:13:05.091 Volatile Write Cache: Present 00:13:05.091 Atomic Write Unit (Normal): 1 00:13:05.091 Atomic Write Unit (PFail): 1 00:13:05.091 Atomic Compare & Write Unit: 1 00:13:05.091 Fused Compare & Write: Supported 00:13:05.091 Scatter-Gather List 00:13:05.091 SGL Command Set: Supported (Dword aligned) 00:13:05.091 SGL Keyed: Not Supported 00:13:05.091 SGL Bit Bucket Descriptor: Not Supported 00:13:05.091 SGL Metadata Pointer: Not Supported 00:13:05.091 Oversized SGL: Not Supported 00:13:05.091 SGL Metadata Address: Not Supported 00:13:05.091 SGL Offset: Not Supported 00:13:05.091 Transport SGL Data Block: Not Supported 00:13:05.091 Replay Protected Memory Block: Not Supported 00:13:05.091 00:13:05.091 Firmware Slot Information 00:13:05.091 ========================= 00:13:05.091 Active slot: 1 00:13:05.091 Slot 1 Firmware Revision: 24.09 00:13:05.091 00:13:05.091 00:13:05.091 Commands Supported and Effects 00:13:05.091 ============================== 00:13:05.091 Admin Commands 00:13:05.091 -------------- 00:13:05.091 Get Log Page (02h): Supported 
00:13:05.091 Identify (06h): Supported 00:13:05.091 Abort (08h): Supported 00:13:05.091 Set Features (09h): Supported 00:13:05.091 Get Features (0Ah): Supported 00:13:05.091 Asynchronous Event Request (0Ch): Supported 00:13:05.091 Keep Alive (18h): Supported 00:13:05.091 I/O Commands 00:13:05.091 ------------ 00:13:05.091 Flush (00h): Supported LBA-Change 00:13:05.091 Write (01h): Supported LBA-Change 00:13:05.091 Read (02h): Supported 00:13:05.091 Compare (05h): Supported 00:13:05.091 Write Zeroes (08h): Supported LBA-Change 00:13:05.091 Dataset Management (09h): Supported LBA-Change 00:13:05.091 Copy (19h): Supported LBA-Change 00:13:05.091 00:13:05.091 Error Log 00:13:05.091 ========= 00:13:05.091 00:13:05.091 Arbitration 00:13:05.091 =========== 00:13:05.091 Arbitration Burst: 1 00:13:05.091 00:13:05.091 Power Management 00:13:05.091 ================ 00:13:05.091 Number of Power States: 1 00:13:05.091 Current Power State: Power State #0 00:13:05.091 Power State #0: 00:13:05.091 Max Power: 0.00 W 00:13:05.091 Non-Operational State: Operational 00:13:05.091 Entry Latency: Not Reported 00:13:05.091 Exit Latency: Not Reported 00:13:05.091 Relative Read Throughput: 0 00:13:05.091 Relative Read Latency: 0 00:13:05.091 Relative Write Throughput: 0 00:13:05.091 Relative Write Latency: 0 00:13:05.091 Idle Power: Not Reported 00:13:05.091 Active Power: Not Reported 00:13:05.091 Non-Operational Permissive Mode: Not Supported 00:13:05.091 00:13:05.091 Health Information 00:13:05.091 ================== 00:13:05.091 Critical Warnings: 00:13:05.091 Available Spare Space: OK 00:13:05.091 Temperature: OK 00:13:05.091 Device Reliability: OK 00:13:05.091 Read Only: No 00:13:05.091 Volatile Memory Backup: OK 00:13:05.091 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:05.091 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:05.091 Available Spare: 0% 00:13:05.091 Available Sp[2024-07-25 14:14:34.578250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:05.091 [2024-07-25 14:14:34.586068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:05.091 [2024-07-25 14:14:34.586118] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:05.091 [2024-07-25 14:14:34.586136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.091 [2024-07-25 14:14:34.586147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.091 [2024-07-25 14:14:34.586157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.091 [2024-07-25 14:14:34.586167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:05.091 [2024-07-25 14:14:34.586247] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:05.091 [2024-07-25 14:14:34.586269] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:05.091 [2024-07-25 14:14:34.587252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:13:05.091 [2024-07-25 14:14:34.587338] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:05.091 [2024-07-25 14:14:34.587368] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:05.091 [2024-07-25 14:14:34.588257] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:05.091 [2024-07-25 14:14:34.588282] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:05.091 [2024-07-25 14:14:34.588336] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:05.091 [2024-07-25 14:14:34.589541] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:05.091 are Threshold: 0% 00:13:05.091 Life Percentage Used: 0% 00:13:05.091 Data Units Read: 0 00:13:05.091 Data Units Written: 0 00:13:05.091 Host Read Commands: 0 00:13:05.091 Host Write Commands: 0 00:13:05.091 Controller Busy Time: 0 minutes 00:13:05.091 Power Cycles: 0 00:13:05.091 Power On Hours: 0 hours 00:13:05.091 Unsafe Shutdowns: 0 00:13:05.091 Unrecoverable Media Errors: 0 00:13:05.091 Lifetime Error Log Entries: 0 00:13:05.091 Warning Temperature Time: 0 minutes 00:13:05.091 Critical Temperature Time: 0 minutes 00:13:05.091 00:13:05.091 Number of Queues 00:13:05.091 ================ 00:13:05.091 Number of I/O Submission Queues: 127 00:13:05.091 Number of I/O Completion Queues: 127 00:13:05.091 00:13:05.091 Active Namespaces 00:13:05.091 ================= 00:13:05.091 Namespace ID:1 00:13:05.091 Error Recovery Timeout: Unlimited 00:13:05.091 Command Set Identifier: NVM (00h) 00:13:05.091 Deallocate: Supported 00:13:05.091 Deallocated/Unwritten Error: Not Supported 00:13:05.091 Deallocated Read Value: Unknown 00:13:05.091 Deallocate in Write Zeroes: Not Supported 00:13:05.091 Deallocated Guard Field: 0xFFFF 00:13:05.091 Flush: Supported 00:13:05.091 Reservation: Supported 00:13:05.091 Namespace Sharing Capabilities: Multiple Controllers 00:13:05.091 Size (in LBAs): 131072 (0GiB) 00:13:05.091 Capacity (in LBAs): 131072 (0GiB) 00:13:05.091 Utilization (in LBAs): 131072 (0GiB) 00:13:05.091 NGUID: 46BD884134F2480D81AC24E391775428 00:13:05.091 UUID: 46bd8841-34f2-480d-81ac-24e391775428 00:13:05.091 Thin Provisioning: Not Supported 00:13:05.091 Per-NS Atomic Units: Yes 00:13:05.091 Atomic Boundary Size (Normal): 0 00:13:05.091 Atomic Boundary Size (PFail): 0 00:13:05.091 Atomic Boundary Offset: 0 00:13:05.091 Maximum Single Source Range Length: 65535 00:13:05.091 Maximum Copy Length: 65535 00:13:05.091 Maximum Source Range Count: 1 00:13:05.091 NGUID/EUI64 Never Reused: No 00:13:05.091 Namespace Write Protected: No 00:13:05.091 Number of LBA Formats: 1 00:13:05.091 Current LBA Format: LBA Format #00 00:13:05.091 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:05.091 00:13:05.092 14:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:05.092 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.352 [2024-07-25 
14:14:34.817882] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:10.658 Initializing NVMe Controllers 00:13:10.658 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:10.658 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:10.658 Initialization complete. Launching workers. 00:13:10.658 ======================================================== 00:13:10.658 Latency(us) 00:13:10.658 Device Information : IOPS MiB/s Average min max 00:13:10.658 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33555.86 131.08 3813.69 1195.64 9274.22 00:13:10.658 ======================================================== 00:13:10.658 Total : 33555.86 131.08 3813.69 1195.64 9274.22 00:13:10.658 00:13:10.658 [2024-07-25 14:14:39.918426] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:10.658 14:14:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:10.658 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.658 [2024-07-25 14:14:40.162132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.931 Initializing NVMe Controllers 00:13:15.931 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:15.931 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:15.931 Initialization complete. Launching workers. 
00:13:15.931 ======================================================== 00:13:15.931 Latency(us) 00:13:15.931 Device Information : IOPS MiB/s Average min max 00:13:15.931 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31029.09 121.21 4124.44 1246.85 7539.61 00:13:15.931 ======================================================== 00:13:15.931 Total : 31029.09 121.21 4124.44 1246.85 7539.61 00:13:15.931 00:13:15.931 [2024-07-25 14:14:45.182016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.931 14:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:15.931 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.931 [2024-07-25 14:14:45.406950] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:21.206 [2024-07-25 14:14:50.544193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:21.207 Initializing NVMe Controllers 00:13:21.207 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:21.207 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:21.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:21.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:21.207 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:21.207 Initialization complete. Launching workers. 00:13:21.207 Starting thread on core 2 00:13:21.207 Starting thread on core 3 00:13:21.207 Starting thread on core 1 00:13:21.207 14:14:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:21.207 EAL: No free 2048 kB hugepages reported on node 1 00:13:21.207 [2024-07-25 14:14:50.842543] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.501 [2024-07-25 14:14:53.904245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.501 Initializing NVMe Controllers 00:13:24.501 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.501 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.501 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:24.501 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:24.501 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:24.501 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:24.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:24.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:24.501 Initialization complete. Launching workers. 
00:13:24.501 Starting thread on core 1 with urgent priority queue 00:13:24.501 Starting thread on core 2 with urgent priority queue 00:13:24.501 Starting thread on core 3 with urgent priority queue 00:13:24.501 Starting thread on core 0 with urgent priority queue 00:13:24.501 SPDK bdev Controller (SPDK2 ) core 0: 3848.67 IO/s 25.98 secs/100000 ios 00:13:24.501 SPDK bdev Controller (SPDK2 ) core 1: 3405.33 IO/s 29.37 secs/100000 ios 00:13:24.501 SPDK bdev Controller (SPDK2 ) core 2: 3603.00 IO/s 27.75 secs/100000 ios 00:13:24.501 SPDK bdev Controller (SPDK2 ) core 3: 3828.33 IO/s 26.12 secs/100000 ios 00:13:24.501 ======================================================== 00:13:24.501 00:13:24.501 14:14:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:24.501 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.760 [2024-07-25 14:14:54.207602] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:24.760 Initializing NVMe Controllers 00:13:24.760 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.760 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:24.760 Namespace ID: 1 size: 0GB 00:13:24.760 Initialization complete. 00:13:24.760 INFO: using host memory buffer for IO 00:13:24.760 Hello world! 00:13:24.760 [2024-07-25 14:14:54.216660] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:24.760 14:14:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:24.760 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.018 [2024-07-25 14:14:54.517387] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:25.953 Initializing NVMe Controllers 00:13:25.953 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.953 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:25.953 Initialization complete. Launching workers. 
00:13:25.953 submit (in ns) avg, min, max = 9153.8, 3492.2, 4015783.3 00:13:25.953 complete (in ns) avg, min, max = 26484.5, 2054.4, 4997584.4 00:13:25.953 00:13:25.953 Submit histogram 00:13:25.953 ================ 00:13:25.953 Range in us Cumulative Count 00:13:25.953 3.484 - 3.508: 0.5573% ( 72) 00:13:25.953 3.508 - 3.532: 2.0512% ( 193) 00:13:25.953 3.532 - 3.556: 4.9385% ( 373) 00:13:25.953 3.556 - 3.579: 12.8648% ( 1024) 00:13:25.953 3.579 - 3.603: 22.4708% ( 1241) 00:13:25.953 3.603 - 3.627: 32.6418% ( 1314) 00:13:25.953 3.627 - 3.650: 41.5125% ( 1146) 00:13:25.953 3.650 - 3.674: 48.6106% ( 917) 00:13:25.953 3.674 - 3.698: 53.7116% ( 659) 00:13:25.953 3.698 - 3.721: 58.3946% ( 605) 00:13:25.953 3.721 - 3.745: 61.5682% ( 410) 00:13:25.953 3.745 - 3.769: 64.9973% ( 443) 00:13:25.953 3.769 - 3.793: 68.2793% ( 424) 00:13:25.953 3.793 - 3.816: 71.9947% ( 480) 00:13:25.953 3.816 - 3.840: 76.1359% ( 535) 00:13:25.953 3.840 - 3.864: 80.6641% ( 585) 00:13:25.953 3.864 - 3.887: 84.1629% ( 452) 00:13:25.953 3.887 - 3.911: 86.5160% ( 304) 00:13:25.953 3.911 - 3.935: 88.3815% ( 241) 00:13:25.953 3.935 - 3.959: 89.8289% ( 187) 00:13:25.954 3.959 - 3.982: 91.0442% ( 157) 00:13:25.954 3.982 - 4.006: 92.2672% ( 158) 00:13:25.954 4.006 - 4.030: 93.1419% ( 113) 00:13:25.954 4.030 - 4.053: 94.2488% ( 143) 00:13:25.954 4.053 - 4.077: 95.1157% ( 112) 00:13:25.954 4.077 - 4.101: 95.6111% ( 64) 00:13:25.954 4.101 - 4.124: 96.1375% ( 68) 00:13:25.954 4.124 - 4.148: 96.4239% ( 37) 00:13:25.954 4.148 - 4.172: 96.5864% ( 21) 00:13:25.954 4.172 - 4.196: 96.7412% ( 20) 00:13:25.954 4.196 - 4.219: 96.9270% ( 24) 00:13:25.954 4.219 - 4.243: 97.0586% ( 17) 00:13:25.954 4.243 - 4.267: 97.1515% ( 12) 00:13:25.954 4.267 - 4.290: 97.2676% ( 15) 00:13:25.954 4.290 - 4.314: 97.3682% ( 13) 00:13:25.954 4.314 - 4.338: 97.3992% ( 4) 00:13:25.954 4.338 - 4.361: 97.4843% ( 11) 00:13:25.954 4.361 - 4.385: 97.5153% ( 4) 00:13:25.954 4.385 - 4.409: 97.5695% ( 7) 00:13:25.954 4.409 - 4.433: 97.5772% ( 1) 00:13:25.954 4.433 - 4.456: 97.5927% ( 2) 00:13:25.954 4.456 - 4.480: 97.6082% ( 2) 00:13:25.954 4.527 - 4.551: 97.6237% ( 2) 00:13:25.954 4.670 - 4.693: 97.6314% ( 1) 00:13:25.954 4.741 - 4.764: 97.6391% ( 1) 00:13:25.954 4.764 - 4.788: 97.6624% ( 3) 00:13:25.954 4.812 - 4.836: 97.7088% ( 6) 00:13:25.954 4.836 - 4.859: 97.7785% ( 9) 00:13:25.954 4.859 - 4.883: 97.8326% ( 7) 00:13:25.954 4.883 - 4.907: 97.8868% ( 7) 00:13:25.954 4.907 - 4.930: 97.9410% ( 7) 00:13:25.954 4.930 - 4.954: 97.9797% ( 5) 00:13:25.954 4.954 - 4.978: 98.0494% ( 9) 00:13:25.954 4.978 - 5.001: 98.0649% ( 2) 00:13:25.954 5.001 - 5.025: 98.1268% ( 8) 00:13:25.954 5.025 - 5.049: 98.1423% ( 2) 00:13:25.954 5.049 - 5.073: 98.1732% ( 4) 00:13:25.954 5.073 - 5.096: 98.2042% ( 4) 00:13:25.954 5.096 - 5.120: 98.2506% ( 6) 00:13:25.954 5.120 - 5.144: 98.2893% ( 5) 00:13:25.954 5.144 - 5.167: 98.3280% ( 5) 00:13:25.954 5.167 - 5.191: 98.3590% ( 4) 00:13:25.954 5.191 - 5.215: 98.3667% ( 1) 00:13:25.954 5.215 - 5.239: 98.3900% ( 3) 00:13:25.954 5.239 - 5.262: 98.3977% ( 1) 00:13:25.954 5.262 - 5.286: 98.4054% ( 1) 00:13:25.954 5.286 - 5.310: 98.4132% ( 1) 00:13:25.954 5.357 - 5.381: 98.4209% ( 1) 00:13:25.954 5.594 - 5.618: 98.4287% ( 1) 00:13:25.954 5.665 - 5.689: 98.4442% ( 2) 00:13:25.954 5.807 - 5.831: 98.4519% ( 1) 00:13:25.954 6.163 - 6.210: 98.4596% ( 1) 00:13:25.954 6.210 - 6.258: 98.4674% ( 1) 00:13:25.954 6.258 - 6.305: 98.4751% ( 1) 00:13:25.954 6.305 - 6.353: 98.4829% ( 1) 00:13:25.954 6.400 - 6.447: 98.4906% ( 1) 00:13:25.954 6.542 - 6.590: 98.4983% ( 1) 
00:13:25.954 6.684 - 6.732: 98.5061% ( 1) 00:13:25.954 6.779 - 6.827: 98.5293% ( 3) 00:13:26.213 6.827 - 6.874: 98.5370% ( 1) 00:13:26.213 6.874 - 6.921: 98.5448% ( 1) 00:13:26.213 7.016 - 7.064: 98.5525% ( 1) 00:13:26.213 7.206 - 7.253: 98.5680% ( 2) 00:13:26.213 7.253 - 7.301: 98.5835% ( 2) 00:13:26.213 7.348 - 7.396: 98.5990% ( 2) 00:13:26.213 7.396 - 7.443: 98.6222% ( 3) 00:13:26.213 7.443 - 7.490: 98.6377% ( 2) 00:13:26.213 7.538 - 7.585: 98.6454% ( 1) 00:13:26.213 7.585 - 7.633: 98.6531% ( 1) 00:13:26.213 7.680 - 7.727: 98.6686% ( 2) 00:13:26.213 7.727 - 7.775: 98.6996% ( 4) 00:13:26.213 7.775 - 7.822: 98.7073% ( 1) 00:13:26.213 7.870 - 7.917: 98.7151% ( 1) 00:13:26.213 7.917 - 7.964: 98.7228% ( 1) 00:13:26.213 7.964 - 8.012: 98.7383% ( 2) 00:13:26.213 8.012 - 8.059: 98.7460% ( 1) 00:13:26.213 8.059 - 8.107: 98.7538% ( 1) 00:13:26.213 8.107 - 8.154: 98.7693% ( 2) 00:13:26.213 8.201 - 8.249: 98.7770% ( 1) 00:13:26.213 8.533 - 8.581: 98.7847% ( 1) 00:13:26.213 8.818 - 8.865: 98.7925% ( 1) 00:13:26.213 8.913 - 8.960: 98.8002% ( 1) 00:13:26.213 9.387 - 9.434: 98.8080% ( 1) 00:13:26.213 9.529 - 9.576: 98.8234% ( 2) 00:13:26.213 9.576 - 9.624: 98.8312% ( 1) 00:13:26.213 9.671 - 9.719: 98.8467% ( 2) 00:13:26.213 9.908 - 9.956: 98.8621% ( 2) 00:13:26.213 9.956 - 10.003: 98.8776% ( 2) 00:13:26.213 10.003 - 10.050: 98.8854% ( 1) 00:13:26.213 10.145 - 10.193: 98.8931% ( 1) 00:13:26.213 10.667 - 10.714: 98.9008% ( 1) 00:13:26.213 11.093 - 11.141: 98.9163% ( 2) 00:13:26.213 11.425 - 11.473: 98.9241% ( 1) 00:13:26.213 11.662 - 11.710: 98.9318% ( 1) 00:13:26.213 12.136 - 12.231: 98.9395% ( 1) 00:13:26.213 12.231 - 12.326: 98.9473% ( 1) 00:13:26.213 12.516 - 12.610: 98.9550% ( 1) 00:13:26.213 12.610 - 12.705: 98.9628% ( 1) 00:13:26.213 13.179 - 13.274: 98.9782% ( 2) 00:13:26.213 13.274 - 13.369: 98.9937% ( 2) 00:13:26.213 13.559 - 13.653: 99.0092% ( 2) 00:13:26.213 13.748 - 13.843: 99.0170% ( 1) 00:13:26.213 14.222 - 14.317: 99.0324% ( 2) 00:13:26.213 14.601 - 14.696: 99.0402% ( 1) 00:13:26.213 14.981 - 15.076: 99.0557% ( 2) 00:13:26.213 16.972 - 17.067: 99.0711% ( 2) 00:13:26.213 17.256 - 17.351: 99.0789% ( 1) 00:13:26.213 17.351 - 17.446: 99.1176% ( 5) 00:13:26.213 17.446 - 17.541: 99.1408% ( 3) 00:13:26.213 17.541 - 17.636: 99.1795% ( 5) 00:13:26.213 17.636 - 17.730: 99.2569% ( 10) 00:13:26.213 17.730 - 17.825: 99.3188% ( 8) 00:13:26.213 17.825 - 17.920: 99.3343% ( 2) 00:13:26.213 17.920 - 18.015: 99.3808% ( 6) 00:13:26.213 18.015 - 18.110: 99.4272% ( 6) 00:13:26.213 18.110 - 18.204: 99.5046% ( 10) 00:13:26.213 18.204 - 18.299: 99.5356% ( 4) 00:13:26.213 18.299 - 18.394: 99.5433% ( 1) 00:13:26.213 18.394 - 18.489: 99.5975% ( 7) 00:13:26.213 18.489 - 18.584: 99.6517% ( 7) 00:13:26.213 18.584 - 18.679: 99.6749% ( 3) 00:13:26.213 18.679 - 18.773: 99.7213% ( 6) 00:13:26.213 18.868 - 18.963: 99.7368% ( 2) 00:13:26.213 18.963 - 19.058: 99.7600% ( 3) 00:13:26.213 19.058 - 19.153: 99.7755% ( 2) 00:13:26.213 19.153 - 19.247: 99.7833% ( 1) 00:13:26.214 19.721 - 19.816: 99.7910% ( 1) 00:13:26.214 20.006 - 20.101: 99.7987% ( 1) 00:13:26.214 20.575 - 20.670: 99.8065% ( 1) 00:13:26.214 20.670 - 20.764: 99.8142% ( 1) 00:13:26.214 21.049 - 21.144: 99.8220% ( 1) 00:13:26.214 21.239 - 21.333: 99.8297% ( 1) 00:13:26.214 21.713 - 21.807: 99.8374% ( 1) 00:13:26.214 22.566 - 22.661: 99.8452% ( 1) 00:13:26.214 24.841 - 25.031: 99.8529% ( 1) 00:13:26.214 25.410 - 25.600: 99.8607% ( 1) 00:13:26.214 36.219 - 36.409: 99.8684% ( 1) 00:13:26.214 3980.705 - 4004.978: 99.9613% ( 12) 00:13:26.214 4004.978 - 4029.250: 100.0000% 
( 5) 00:13:26.214 00:13:26.214 Complete histogram 00:13:26.214 ================== 00:13:26.214 Range in us Cumulative Count 00:13:26.214 2.050 - 2.062: 1.3236% ( 171) 00:13:26.214 2.062 - 2.074: 32.7425% ( 4059) 00:13:26.214 2.074 - 2.086: 49.0982% ( 2113) 00:13:26.214 2.086 - 2.098: 51.3894% ( 296) 00:13:26.214 2.098 - 2.110: 59.6099% ( 1062) 00:13:26.214 2.110 - 2.121: 62.4197% ( 363) 00:13:26.214 2.121 - 2.133: 66.1274% ( 479) 00:13:26.214 2.133 - 2.145: 75.2845% ( 1183) 00:13:26.214 2.145 - 2.157: 78.0324% ( 355) 00:13:26.214 2.157 - 2.169: 79.6656% ( 211) 00:13:26.214 2.169 - 2.181: 82.3284% ( 344) 00:13:26.214 2.181 - 2.193: 83.2495% ( 119) 00:13:26.214 2.193 - 2.204: 84.4725% ( 158) 00:13:26.214 2.204 - 2.216: 88.3195% ( 497) 00:13:26.214 2.216 - 2.228: 91.2919% ( 384) 00:13:26.214 2.228 - 2.240: 92.4994% ( 156) 00:13:26.214 2.240 - 2.252: 93.4438% ( 122) 00:13:26.214 2.252 - 2.264: 93.8618% ( 54) 00:13:26.214 2.264 - 2.276: 93.9856% ( 16) 00:13:26.214 2.276 - 2.287: 94.3030% ( 41) 00:13:26.214 2.287 - 2.299: 94.9919% ( 89) 00:13:26.214 2.299 - 2.311: 95.3712% ( 49) 00:13:26.214 2.311 - 2.323: 95.5105% ( 18) 00:13:26.214 2.323 - 2.335: 95.6730% ( 21) 00:13:26.214 2.335 - 2.347: 95.8743% ( 26) 00:13:26.214 2.347 - 2.359: 96.1917% ( 41) 00:13:26.214 2.359 - 2.370: 96.6019% ( 53) 00:13:26.214 2.370 - 2.382: 97.0973% ( 64) 00:13:26.214 2.382 - 2.394: 97.3837% ( 37) 00:13:26.214 2.394 - 2.406: 97.5695% ( 24) 00:13:26.214 2.406 - 2.418: 97.6856% ( 15) 00:13:26.214 2.418 - 2.430: 97.8404% ( 20) 00:13:26.214 2.430 - 2.441: 97.9720% ( 17) 00:13:26.214 2.441 - 2.453: 98.0803% ( 14) 00:13:26.214 2.453 - 2.465: 98.1965% ( 15) 00:13:26.214 2.465 - 2.477: 98.2739% ( 10) 00:13:26.214 2.477 - 2.489: 98.3590% ( 11) 00:13:26.214 2.489 - 2.501: 98.4132% ( 7) 00:13:26.214 2.501 - 2.513: 98.4364% ( 3) 00:13:26.214 2.513 - 2.524: 98.4674% ( 4) 00:13:26.214 2.524 - 2.536: 98.4906% ( 3) 00:13:26.214 2.536 - 2.548: 98.4983% ( 1) 00:13:26.214 2.548 - 2.560: 98.5061% ( 1) 00:13:26.214 2.572 - 2.584: 98.5138% ( 1) 00:13:26.214 2.584 - 2.596: 98.5216% ( 1) 00:13:26.214 2.631 - 2.643: 98.5293% ( 1) 00:13:26.214 3.366 - 3.390: 98.5370% ( 1) 00:13:26.214 3.437 - 3.461: 98.5603% ( 3) 00:13:26.214 3.484 - 3.508: 98.5757% ( 2) 00:13:26.214 3.532 - 3.556: 98.5912% ( 2) 00:13:26.214 3.556 - 3.579: 98.5990% ( 1) 00:13:26.214 3.579 - 3.603: 98.6067% ( 1) 00:13:26.214 3.603 - 3.627: 98.6144% ( 1) 00:13:26.214 3.627 - 3.650: 98.6222% ( 1) 00:13:26.214 3.674 - 3.698: 98.6299% ( 1) 00:13:26.214 3.698 - 3.721: 98.6531% ( 3) 00:13:26.214 3.721 - 3.745: 98.6609% ( 1) 00:13:26.214 3.745 - 3.769: 98.6686% ( 1) 00:13:26.214 3.793 - 3.816: 98.6764% ( 1) 00:13:26.214 3.840 - 3.864: 98.6841% ( 1) 00:13:26.214 3.887 - 3.911: 98.6918% ( 1) 00:13:26.214 3.982 - 4.006: 98.6996% ( 1) 00:13:26.214 4.006 - 4.030: 98.7073% ( 1) 00:13:26.214 4.196 - 4.219: 98.7151% ( 1) 00:13:26.214 4.646 - 4.670: 98.7228% ( 1) 00:13:26.214 5.404 - 5.428: 98.7460% ( 3) 00:13:26.214 5.523 - 5.547: 98.7538% ( 1) 00:13:26.214 5.641 - 5.665: 98.7615% ( 1) 00:13:26.214 5.689 - 5.713: 98.7693% ( 1) 00:13:26.214 5.760 - 5.784: 98.7770% ( 1) 00:13:26.214 5.807 - 5.831: 98.7847% ( 1) 00:13:26.214 5.973 - 5.997: 98.7925% ( 1) 00:13:26.214 6.068 - 6.116: 98.8002% ( 1) 00:13:26.214 6.163 - 6.210: 98.8080% ( 1) 00:13:26.214 6.210 - 6.258: 98.8234% ( 2) 00:13:26.214 6.353 - 6.400: 98.8312% ( 1) 00:13:26.214 6.400 - 6.447: 98.8389% ( 1) 00:13:26.214 6.637 - 6.684: 98.8467% ( 1) 00:13:26.214 6.779 - 6.827: 98.8544% ( 1) 00:13:26.214 7.159 - 7.206: 98.8621% ( 1) 00:13:26.214 
8.012 - 8.059: 98.8699% ( 1) 00:13:26.214 8.154 - 8.201: 98.8776% ( 1) 00:13:26.214 15.455 - 15.550: 98.8854% ( 1) 00:13:26.214 15.550 - 15.644: 98.8931% ( 1) 00:13:26.214 15.644 - 15.739: 98.9163% ( 3) 00:13:26.214 15.834 - 15.929: 98.9395% ( 3) 00:13:26.214 15.929 - 16.024: 98.9628% ( 3) 00:13:26.214 16.024 - 16.119: 98.9860% ( 3) 00:13:26.214 16.119 - 16.213: 99.0170% ( 4) 00:13:26.214 16.213 - 16.308: 99.0479% ( 4) 00:13:26.214 16.308 - 16.403: 99.0789% ( 4) 00:13:26.214 16.403 - 16.498: 99.0944% ( 2) 00:13:26.214 16.498 - 16.593: 9[2024-07-25 14:14:55.612035] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:26.214 9.1563% ( 8) 00:13:26.214 16.593 - 16.687: 99.2105% ( 7) 00:13:26.214 16.687 - 16.782: 99.2337% ( 3) 00:13:26.214 16.782 - 16.877: 99.2492% ( 2) 00:13:26.214 16.877 - 16.972: 99.2646% ( 2) 00:13:26.214 16.972 - 17.067: 99.2801% ( 2) 00:13:26.214 17.067 - 17.161: 99.3034% ( 3) 00:13:26.214 17.161 - 17.256: 99.3188% ( 2) 00:13:26.214 17.256 - 17.351: 99.3266% ( 1) 00:13:26.214 17.446 - 17.541: 99.3421% ( 2) 00:13:26.214 17.825 - 17.920: 99.3498% ( 1) 00:13:26.214 17.920 - 18.015: 99.3575% ( 1) 00:13:26.214 18.110 - 18.204: 99.3653% ( 1) 00:13:26.214 18.584 - 18.679: 99.3730% ( 1) 00:13:26.214 19.153 - 19.247: 99.3808% ( 1) 00:13:26.214 19.911 - 20.006: 99.3885% ( 1) 00:13:26.214 2997.665 - 3009.801: 99.3962% ( 1) 00:13:26.214 3009.801 - 3021.938: 99.4040% ( 1) 00:13:26.214 3021.938 - 3034.074: 99.4117% ( 1) 00:13:26.214 3543.799 - 3568.071: 99.4195% ( 1) 00:13:26.214 3980.705 - 4004.978: 99.8607% ( 57) 00:13:26.214 4004.978 - 4029.250: 99.9923% ( 17) 00:13:26.214 4975.881 - 5000.154: 100.0000% ( 1) 00:13:26.214 00:13:26.214 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:26.214 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:26.214 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:26.214 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:26.214 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:26.473 [ 00:13:26.473 { 00:13:26.473 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:26.473 "subtype": "Discovery", 00:13:26.473 "listen_addresses": [], 00:13:26.473 "allow_any_host": true, 00:13:26.473 "hosts": [] 00:13:26.473 }, 00:13:26.473 { 00:13:26.473 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:26.473 "subtype": "NVMe", 00:13:26.473 "listen_addresses": [ 00:13:26.473 { 00:13:26.473 "trtype": "VFIOUSER", 00:13:26.473 "adrfam": "IPv4", 00:13:26.473 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:26.473 "trsvcid": "0" 00:13:26.473 } 00:13:26.473 ], 00:13:26.473 "allow_any_host": true, 00:13:26.473 "hosts": [], 00:13:26.473 "serial_number": "SPDK1", 00:13:26.473 "model_number": "SPDK bdev Controller", 00:13:26.473 "max_namespaces": 32, 00:13:26.473 "min_cntlid": 1, 00:13:26.473 "max_cntlid": 65519, 00:13:26.473 "namespaces": [ 00:13:26.473 { 00:13:26.473 "nsid": 1, 00:13:26.473 "bdev_name": "Malloc1", 00:13:26.473 "name": "Malloc1", 00:13:26.473 "nguid": "E78EE0A5BE304F9DAF21DFB61EC04463", 00:13:26.473 "uuid": 
"e78ee0a5-be30-4f9d-af21-dfb61ec04463" 00:13:26.473 }, 00:13:26.473 { 00:13:26.473 "nsid": 2, 00:13:26.473 "bdev_name": "Malloc3", 00:13:26.473 "name": "Malloc3", 00:13:26.473 "nguid": "2B8AE27FB3FD4FDF89F5F3064E5B971A", 00:13:26.473 "uuid": "2b8ae27f-b3fd-4fdf-89f5-f3064e5b971a" 00:13:26.473 } 00:13:26.473 ] 00:13:26.473 }, 00:13:26.473 { 00:13:26.473 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:26.473 "subtype": "NVMe", 00:13:26.473 "listen_addresses": [ 00:13:26.473 { 00:13:26.473 "trtype": "VFIOUSER", 00:13:26.473 "adrfam": "IPv4", 00:13:26.473 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:26.473 "trsvcid": "0" 00:13:26.473 } 00:13:26.473 ], 00:13:26.473 "allow_any_host": true, 00:13:26.473 "hosts": [], 00:13:26.473 "serial_number": "SPDK2", 00:13:26.473 "model_number": "SPDK bdev Controller", 00:13:26.473 "max_namespaces": 32, 00:13:26.473 "min_cntlid": 1, 00:13:26.473 "max_cntlid": 65519, 00:13:26.473 "namespaces": [ 00:13:26.473 { 00:13:26.473 "nsid": 1, 00:13:26.473 "bdev_name": "Malloc2", 00:13:26.473 "name": "Malloc2", 00:13:26.473 "nguid": "46BD884134F2480D81AC24E391775428", 00:13:26.473 "uuid": "46bd8841-34f2-480d-81ac-24e391775428" 00:13:26.473 } 00:13:26.473 ] 00:13:26.473 } 00:13:26.473 ] 00:13:26.473 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:26.473 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=899679 00:13:26.473 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:26.473 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:26.473 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:26.474 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:26.474 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:26.474 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:26.474 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:26.474 14:14:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:26.474 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.474 [2024-07-25 14:14:56.099553] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:26.732 Malloc4 00:13:26.732 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:26.990 [2024-07-25 14:14:56.455133] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:26.990 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:26.990 Asynchronous Event Request test 00:13:26.990 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:26.990 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:26.990 Registering asynchronous event callbacks... 00:13:26.990 Starting namespace attribute notice tests for all controllers... 00:13:26.990 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:26.990 aer_cb - Changed Namespace 00:13:26.990 Cleaning up... 00:13:27.248 [ 00:13:27.248 { 00:13:27.248 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:27.248 "subtype": "Discovery", 00:13:27.248 "listen_addresses": [], 00:13:27.248 "allow_any_host": true, 00:13:27.248 "hosts": [] 00:13:27.248 }, 00:13:27.248 { 00:13:27.248 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:27.248 "subtype": "NVMe", 00:13:27.248 "listen_addresses": [ 00:13:27.248 { 00:13:27.248 "trtype": "VFIOUSER", 00:13:27.248 "adrfam": "IPv4", 00:13:27.248 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:27.248 "trsvcid": "0" 00:13:27.248 } 00:13:27.248 ], 00:13:27.248 "allow_any_host": true, 00:13:27.248 "hosts": [], 00:13:27.248 "serial_number": "SPDK1", 00:13:27.248 "model_number": "SPDK bdev Controller", 00:13:27.248 "max_namespaces": 32, 00:13:27.248 "min_cntlid": 1, 00:13:27.248 "max_cntlid": 65519, 00:13:27.248 "namespaces": [ 00:13:27.248 { 00:13:27.248 "nsid": 1, 00:13:27.248 "bdev_name": "Malloc1", 00:13:27.248 "name": "Malloc1", 00:13:27.248 "nguid": "E78EE0A5BE304F9DAF21DFB61EC04463", 00:13:27.248 "uuid": "e78ee0a5-be30-4f9d-af21-dfb61ec04463" 00:13:27.248 }, 00:13:27.248 { 00:13:27.248 "nsid": 2, 00:13:27.248 "bdev_name": "Malloc3", 00:13:27.248 "name": "Malloc3", 00:13:27.248 "nguid": "2B8AE27FB3FD4FDF89F5F3064E5B971A", 00:13:27.248 "uuid": "2b8ae27f-b3fd-4fdf-89f5-f3064e5b971a" 00:13:27.248 } 00:13:27.248 ] 00:13:27.248 }, 00:13:27.248 { 00:13:27.248 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:27.248 "subtype": "NVMe", 00:13:27.248 "listen_addresses": [ 00:13:27.248 { 00:13:27.248 "trtype": "VFIOUSER", 00:13:27.248 "adrfam": "IPv4", 00:13:27.248 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:27.248 "trsvcid": "0" 00:13:27.248 } 00:13:27.248 ], 00:13:27.248 "allow_any_host": true, 00:13:27.248 "hosts": [], 00:13:27.248 
"serial_number": "SPDK2", 00:13:27.248 "model_number": "SPDK bdev Controller", 00:13:27.248 "max_namespaces": 32, 00:13:27.248 "min_cntlid": 1, 00:13:27.248 "max_cntlid": 65519, 00:13:27.248 "namespaces": [ 00:13:27.248 { 00:13:27.248 "nsid": 1, 00:13:27.248 "bdev_name": "Malloc2", 00:13:27.248 "name": "Malloc2", 00:13:27.248 "nguid": "46BD884134F2480D81AC24E391775428", 00:13:27.248 "uuid": "46bd8841-34f2-480d-81ac-24e391775428" 00:13:27.248 }, 00:13:27.248 { 00:13:27.248 "nsid": 2, 00:13:27.248 "bdev_name": "Malloc4", 00:13:27.248 "name": "Malloc4", 00:13:27.248 "nguid": "12973F688D884FA8ADAA1551AC579A8D", 00:13:27.248 "uuid": "12973f68-8d88-4fa8-adaa-1551ac579a8d" 00:13:27.248 } 00:13:27.248 ] 00:13:27.248 } 00:13:27.248 ] 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 899679 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 894086 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 894086 ']' 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 894086 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 894086 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 894086' 00:13:27.248 killing process with pid 894086 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 894086 00:13:27.248 14:14:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 894086 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=899823 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 899823' 00:13:27.507 Process pid: 899823 00:13:27.507 14:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 899823 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 899823 ']' 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:27.507 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:27.507 [2024-07-25 14:14:57.140097] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:27.507 [2024-07-25 14:14:57.141094] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:13:27.507 [2024-07-25 14:14:57.141152] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:27.766 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.766 [2024-07-25 14:14:57.198751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:27.766 [2024-07-25 14:14:57.297170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:27.766 [2024-07-25 14:14:57.297225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:27.766 [2024-07-25 14:14:57.297254] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:27.766 [2024-07-25 14:14:57.297265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:27.767 [2024-07-25 14:14:57.297275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:27.767 [2024-07-25 14:14:57.297357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.767 [2024-07-25 14:14:57.297422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:27.767 [2024-07-25 14:14:57.297491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:27.767 [2024-07-25 14:14:57.297494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.767 [2024-07-25 14:14:57.393050] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:27.767 [2024-07-25 14:14:57.393308] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:27.767 [2024-07-25 14:14:57.393569] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:13:27.767 [2024-07-25 14:14:57.394229] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:27.767 [2024-07-25 14:14:57.394465] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:28.026 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:28.026 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:28.026 14:14:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:28.960 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:29.218 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:29.218 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:29.218 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:29.218 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:29.218 14:14:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:29.475 Malloc1 00:13:29.475 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:29.733 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:29.990 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:30.248 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:30.248 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:30.248 14:14:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:30.814 Malloc2 00:13:30.814 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:30.814 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:31.071 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:13:31.330 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:31.330 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 899823 00:13:31.330 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 899823 ']' 00:13:31.330 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 899823 00:13:31.330 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:31.330 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:31.330 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 899823 00:13:31.608 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:31.608 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:31.608 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 899823' 00:13:31.608 killing process with pid 899823 00:13:31.608 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 899823 00:13:31.608 14:15:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 899823 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:31.874 00:13:31.874 real 0m52.753s 00:13:31.874 user 3m27.939s 00:13:31.874 sys 0m4.531s 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:31.874 ************************************ 00:13:31.874 END TEST nvmf_vfio_user 00:13:31.874 ************************************ 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:31.874 ************************************ 00:13:31.874 START TEST nvmf_vfio_user_nvme_compliance 00:13:31.874 ************************************ 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:31.874 * Looking for test storage... 
00:13:31.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:31.874 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=900493 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 900493' 00:13:31.875 Process pid: 900493 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 900493 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 900493 ']' 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:31.875 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:31.875 [2024-07-25 14:15:01.456153] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:13:31.875 [2024-07-25 14:15:01.456258] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.875 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.875 [2024-07-25 14:15:01.515505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.134 [2024-07-25 14:15:01.625355] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.134 [2024-07-25 14:15:01.625427] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.134 [2024-07-25 14:15:01.625440] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.134 [2024-07-25 14:15:01.625451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.134 [2024-07-25 14:15:01.625476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.134 [2024-07-25 14:15:01.625581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.134 [2024-07-25 14:15:01.625711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.134 [2024-07-25 14:15:01.625714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.134 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.134 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:32.134 14:15:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.515 malloc0 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.515 14:15:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:33.515 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.515 00:13:33.515 00:13:33.515 CUnit - A unit testing framework for C - Version 2.1-3 00:13:33.515 http://cunit.sourceforge.net/ 00:13:33.515 00:13:33.515 00:13:33.515 Suite: nvme_compliance 00:13:33.515 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 14:15:02.969588] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.515 [2024-07-25 14:15:02.971092] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:33.515 [2024-07-25 14:15:02.971132] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:33.515 [2024-07-25 14:15:02.971146] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:33.515 [2024-07-25 14:15:02.972610] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.515 passed 00:13:33.515 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 14:15:03.058194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.515 [2024-07-25 14:15:03.061213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.515 passed 00:13:33.515 Test: admin_identify_ns ...[2024-07-25 14:15:03.143560] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.776 [2024-07-25 14:15:03.204090] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:33.776 [2024-07-25 14:15:03.212075] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:33.776 [2024-07-25 
14:15:03.233223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.776 passed 00:13:33.776 Test: admin_get_features_mandatory_features ...[2024-07-25 14:15:03.315272] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.776 [2024-07-25 14:15:03.318291] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:33.776 passed 00:13:33.776 Test: admin_get_features_optional_features ...[2024-07-25 14:15:03.402848] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:33.776 [2024-07-25 14:15:03.405875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.034 passed 00:13:34.034 Test: admin_set_features_number_of_queues ...[2024-07-25 14:15:03.492548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.034 [2024-07-25 14:15:03.598177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.034 passed 00:13:34.034 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 14:15:03.678720] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.034 [2024-07-25 14:15:03.683754] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.292 passed 00:13:34.292 Test: admin_get_log_page_with_lpo ...[2024-07-25 14:15:03.766558] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.292 [2024-07-25 14:15:03.836093] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:34.292 [2024-07-25 14:15:03.849154] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.292 passed 00:13:34.292 Test: fabric_property_get ...[2024-07-25 14:15:03.931663] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.292 [2024-07-25 14:15:03.932929] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:34.292 [2024-07-25 14:15:03.934683] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.596 passed 00:13:34.596 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 14:15:04.018240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.596 [2024-07-25 14:15:04.019551] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:34.596 [2024-07-25 14:15:04.021261] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.596 passed 00:13:34.596 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 14:15:04.105570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.596 [2024-07-25 14:15:04.189073] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:34.596 [2024-07-25 14:15:04.205086] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:34.596 [2024-07-25 14:15:04.210187] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.596 passed 00:13:34.857 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 14:15:04.292752] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.857 [2024-07-25 14:15:04.294029] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:13:34.857 [2024-07-25 14:15:04.295772] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:34.857 passed 00:13:34.857 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 14:15:04.376911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:34.857 [2024-07-25 14:15:04.452074] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:34.857 [2024-07-25 14:15:04.476069] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:34.857 [2024-07-25 14:15:04.481191] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.115 passed 00:13:35.115 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 14:15:04.564637] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.115 [2024-07-25 14:15:04.565907] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:35.115 [2024-07-25 14:15:04.565963] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:35.115 [2024-07-25 14:15:04.567654] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.115 passed 00:13:35.115 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 14:15:04.650785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.115 [2024-07-25 14:15:04.742083] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:35.115 [2024-07-25 14:15:04.750084] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:35.115 [2024-07-25 14:15:04.758068] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:35.115 [2024-07-25 14:15:04.766076] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:35.374 [2024-07-25 14:15:04.795180] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.374 passed 00:13:35.374 Test: admin_create_io_sq_verify_pc ...[2024-07-25 14:15:04.878738] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:35.374 [2024-07-25 14:15:04.895081] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:35.375 [2024-07-25 14:15:04.913105] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:35.375 passed 00:13:35.375 Test: admin_create_io_qp_max_qps ...[2024-07-25 14:15:04.995665] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:36.750 [2024-07-25 14:15:06.086078] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:37.008 [2024-07-25 14:15:06.477268] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.008 passed 00:13:37.008 Test: admin_create_io_sq_shared_cq ...[2024-07-25 14:15:06.559522] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:37.266 [2024-07-25 14:15:06.694081] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:37.266 [2024-07-25 14:15:06.731156] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:37.266 passed 00:13:37.266 00:13:37.266 Run Summary: Type Total Ran Passed Failed Inactive 00:13:37.266 
suites 1 1 n/a 0 0 00:13:37.266 tests 18 18 18 0 0 00:13:37.266 asserts 360 360 360 0 n/a 00:13:37.266 00:13:37.266 Elapsed time = 1.556 seconds 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 900493 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 900493 ']' 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 900493 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 900493 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 900493' 00:13:37.266 killing process with pid 900493 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 900493 00:13:37.266 14:15:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 900493 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:37.524 00:13:37.524 real 0m5.750s 00:13:37.524 user 0m16.100s 00:13:37.524 sys 0m0.530s 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:37.524 ************************************ 00:13:37.524 END TEST nvmf_vfio_user_nvme_compliance 00:13:37.524 ************************************ 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.524 ************************************ 00:13:37.524 START TEST nvmf_vfio_user_fuzz 00:13:37.524 ************************************ 00:13:37.524 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:37.783 * Looking for test storage... 
00:13:37.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=901336 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 901336' 00:13:37.783 Process pid: 901336 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 901336 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 901336 ']' 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.783 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:38.042 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.042 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:38.042 14:15:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:38.978 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:38.978 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.978 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:38.978 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.978 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:38.978 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:38.978 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:38.979 malloc0 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
00:13:38.979 14:15:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:11.043 Fuzzing completed. Shutting down the fuzz application 00:14:11.043 00:14:11.043 Dumping successful admin opcodes: 00:14:11.043 8, 9, 10, 24, 00:14:11.043 Dumping successful io opcodes: 00:14:11.043 0, 00:14:11.043 NS: 0x200003a1ef00 I/O qp, Total commands completed: 625497, total successful commands: 2425, random_seed: 1864879744 00:14:11.043 NS: 0x200003a1ef00 admin qp, Total commands completed: 80010, total successful commands: 631, random_seed: 772343232 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 901336 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 901336 ']' 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 901336 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 901336 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 901336' 00:14:11.043 killing process with pid 901336 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 901336 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 901336 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:11.043 00:14:11.043 real 0m32.275s 00:14:11.043 user 0m30.201s 00:14:11.043 sys 0m29.080s 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:11.043 ************************************ 
00:14:11.043 END TEST nvmf_vfio_user_fuzz 00:14:11.043 ************************************ 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:11.043 ************************************ 00:14:11.043 START TEST nvmf_auth_target 00:14:11.043 ************************************ 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:11.043 * Looking for test storage... 00:14:11.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.043 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:11.044 14:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:11.044 14:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a 
pci_devs 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:11.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:11.981 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:11.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:11.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:11.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.982 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:12.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:12.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:14:12.243 00:14:12.243 --- 10.0.0.2 ping statistics --- 00:14:12.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.243 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.243 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.243 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:14:12.243 00:14:12.243 --- 10.0.0.1 ping statistics --- 00:14:12.243 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.243 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=907209 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 907209 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 907209 ']' 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 
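The nvmf_tcp_init sequence above builds a back-to-back topology out of the two E810 ports discovered earlier: cvl_0_0 is moved into a private network namespace and acts as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened for NVMe/TCP, and a ping in each direction serves as a sanity check. A minimal standalone sketch of the same setup, assuming the interface names from this run:

# Sketch of the nvmf_tcp_init topology (interface names taken from the trace above).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                       # target side gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator interface, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back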
00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.243 14:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=907230 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=154cc1c3a2edd8d3240e24c1749fc6e630855be996a84bd6 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.jkN 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 154cc1c3a2edd8d3240e24c1749fc6e630855be996a84bd6 0 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 154cc1c3a2edd8d3240e24c1749fc6e630855be996a84bd6 0 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=154cc1c3a2edd8d3240e24c1749fc6e630855be996a84bd6 00:14:12.510 
14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:12.510 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.jkN 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.jkN 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.jkN 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=84023273f820b8eb9f56f7b8c6dced13124a2a241b0c9584b5acec38fcbbc62c 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.eCy 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 84023273f820b8eb9f56f7b8c6dced13124a2a241b0c9584b5acec38fcbbc62c 3 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 84023273f820b8eb9f56f7b8c6dced13124a2a241b0c9584b5acec38fcbbc62c 3 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=84023273f820b8eb9f56f7b8c6dced13124a2a241b0c9584b5acec38fcbbc62c 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.eCy 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.eCy 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.eCy 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.769 14:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1b18b5cf59646c7b1973596a06409492 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ieG 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1b18b5cf59646c7b1973596a06409492 1 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1b18b5cf59646c7b1973596a06409492 1 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1b18b5cf59646c7b1973596a06409492 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ieG 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ieG 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.ieG 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:12.769 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ab3cee8b26d6551e4f7bb0a8610feb9c03c451112921ccd8 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.IIU 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ab3cee8b26d6551e4f7bb0a8610feb9c03c451112921ccd8 2 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
ab3cee8b26d6551e4f7bb0a8610feb9c03c451112921ccd8 2 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ab3cee8b26d6551e4f7bb0a8610feb9c03c451112921ccd8 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.IIU 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.IIU 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.IIU 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6bac88a3c968f73f1c69541cf3fd4a18a730d44db2488b94 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.G8K 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6bac88a3c968f73f1c69541cf3fd4a18a730d44db2488b94 2 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6bac88a3c968f73f1c69541cf3fd4a18a730d44db2488b94 2 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6bac88a3c968f73f1c69541cf3fd4a18a730d44db2488b94 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.G8K 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.G8K 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.G8K 00:14:12.770 14:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=01ac374dbe90397b7cb4ace0f7400d05 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GLi 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 01ac374dbe90397b7cb4ace0f7400d05 1 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 01ac374dbe90397b7cb4ace0f7400d05 1 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=01ac374dbe90397b7cb4ace0f7400d05 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:12.770 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:13.028 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GLi 00:14:13.028 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GLi 00:14:13.028 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.GLi 00:14:13.028 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:13.028 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:13.028 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:13.028 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf793b038945153aed4c19e6544fd94265547f35e817637ed798ef493a3f5e2a 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:13.029 
14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fFr 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf793b038945153aed4c19e6544fd94265547f35e817637ed798ef493a3f5e2a 3 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cf793b038945153aed4c19e6544fd94265547f35e817637ed798ef493a3f5e2a 3 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cf793b038945153aed4c19e6544fd94265547f35e817637ed798ef493a3f5e2a 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fFr 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fFr 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.fFr 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 907209 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 907209 ']' 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.029 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 907230 /var/tmp/host.sock 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 907230 ']' 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
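Each keys[i]/ckeys[i] entry above comes from gen_dhchap_key, which reads random bytes with xxd and wraps them into an NVMe DH-HMAC-CHAP secret of the form DHHC-1:<hash id>:<base64 payload>: before writing it to a 0600 temp file. A rough sketch of that formatting, assuming (per the usual secret representation) that the payload is the printable secret followed by its CRC32; the inline python here is illustrative, not the exact snippet from nvmf/common.sh:

# Illustrative stand-in for gen_dhchap_key/format_dhchap_key (payload framing is an assumption).
key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48-character printable secret, as in the trace
digest=0                                 # 0=null, 1=sha256, 2=sha384, 3=sha512
secret=$(python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
payload = key + zlib.crc32(key).to_bytes(4, "little")   # CRC byte order assumed
print(f"DHHC-1:{digest:02x}:{base64.b64encode(payload).decode()}:")
EOF
)
file=$(mktemp -t spdk.key-null.XXX) && echo "$secret" > "$file" && chmod 0600 "$file"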
00:14:13.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:13.286 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.544 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.544 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:13.544 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:13.544 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.544 14:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.544 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.544 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:13.544 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.jkN 00:14:13.544 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.544 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.544 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.544 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.jkN 00:14:13.544 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.jkN 00:14:13.802 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.eCy ]] 00:14:13.802 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCy 00:14:13.802 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:13.802 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.802 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:13.802 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCy 00:14:13.802 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCy 00:14:14.061 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:14.061 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.ieG 00:14:14.061 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.061 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.061 14:15:43 
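Each secret file is then registered under a key name on both sides: rpc_cmd talks to the nvmf target over its default /var/tmp/spdk.sock, and hostrpc talks to the host-side spdk_tgt on /var/tmp/host.sock, so key0/ckey0 (and later key1..key3) resolve to the same material in both processes. Condensed, the pattern for the first pair looks like this, with the rpc.py path as used in this run:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# target side (nvmf_tgt, default RPC socket)
$RPC keyring_file_add_key key0  /tmp/spdk.key-null.jkN
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCy
# host side (spdk_tgt listening on /var/tmp/host.sock)
$RPC -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.jkN
$RPC -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.eCy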
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.061 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.ieG 00:14:14.061 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.ieG 00:14:14.318 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.IIU ]] 00:14:14.318 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IIU 00:14:14.318 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.318 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.318 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.318 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IIU 00:14:14.318 14:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IIU 00:14:14.576 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:14.576 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.G8K 00:14:14.576 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.576 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.576 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.576 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.G8K 00:14:14.576 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.G8K 00:14:14.835 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.GLi ]] 00:14:14.835 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GLi 00:14:14.835 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.835 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.835 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.835 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GLi 00:14:14.835 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GLi 00:14:15.114 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:15.114 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.fFr 00:14:15.114 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.114 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.114 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.114 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.fFr 00:14:15.114 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.fFr 00:14:15.376 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:15.376 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:15.376 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:15.376 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:15.376 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.376 14:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.633 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:15.890 00:14:15.890 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:15.890 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:15.890 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:16.147 { 00:14:16.147 "cntlid": 1, 00:14:16.147 "qid": 0, 00:14:16.147 "state": "enabled", 00:14:16.147 "thread": "nvmf_tgt_poll_group_000", 00:14:16.147 "listen_address": { 00:14:16.147 "trtype": "TCP", 00:14:16.147 "adrfam": "IPv4", 00:14:16.147 "traddr": "10.0.0.2", 00:14:16.147 "trsvcid": "4420" 00:14:16.147 }, 00:14:16.147 "peer_address": { 00:14:16.147 "trtype": "TCP", 00:14:16.147 "adrfam": "IPv4", 00:14:16.147 "traddr": "10.0.0.1", 00:14:16.147 "trsvcid": "59114" 00:14:16.147 }, 00:14:16.147 "auth": { 00:14:16.147 "state": "completed", 00:14:16.147 "digest": "sha256", 00:14:16.147 "dhgroup": "null" 00:14:16.147 } 00:14:16.147 } 00:14:16.147 ]' 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.147 14:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:16.404 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:14:17.337 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.337 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.337 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.337 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.337 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.337 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:17.337 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.337 14:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:17.595 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:17.854 00:14:17.854 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:17.854 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:17.854 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.112 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.112 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.112 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:18.112 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.112 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:18.112 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:18.112 { 00:14:18.112 "cntlid": 3, 00:14:18.112 "qid": 0, 00:14:18.112 "state": "enabled", 00:14:18.112 "thread": "nvmf_tgt_poll_group_000", 00:14:18.112 "listen_address": { 00:14:18.112 "trtype": "TCP", 00:14:18.112 "adrfam": "IPv4", 00:14:18.112 "traddr": "10.0.0.2", 00:14:18.112 "trsvcid": "4420" 00:14:18.112 }, 00:14:18.112 "peer_address": { 00:14:18.112 "trtype": "TCP", 00:14:18.112 "adrfam": "IPv4", 00:14:18.112 "traddr": "10.0.0.1", 00:14:18.112 "trsvcid": "59136" 00:14:18.112 }, 00:14:18.112 "auth": { 00:14:18.112 "state": "completed", 00:14:18.112 "digest": "sha256", 00:14:18.112 "dhgroup": "null" 00:14:18.112 } 00:14:18.112 } 00:14:18.112 ]' 00:14:18.112 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:18.370 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:18.370 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:18.370 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:18.370 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:18.370 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.370 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.370 14:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:18.628 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:14:19.564 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:19.564 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:14:19.564 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:19.564 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.564 14:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.564 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.564 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:19.564 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.564 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:19.822 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:20.080 00:14:20.080 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:20.080 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:20.080 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:20.338 { 00:14:20.338 "cntlid": 5, 00:14:20.338 "qid": 0, 00:14:20.338 "state": "enabled", 00:14:20.338 "thread": "nvmf_tgt_poll_group_000", 00:14:20.338 "listen_address": { 00:14:20.338 "trtype": "TCP", 00:14:20.338 "adrfam": "IPv4", 00:14:20.338 "traddr": "10.0.0.2", 00:14:20.338 "trsvcid": "4420" 00:14:20.338 }, 00:14:20.338 "peer_address": { 00:14:20.338 "trtype": "TCP", 00:14:20.338 "adrfam": "IPv4", 00:14:20.338 "traddr": "10.0.0.1", 00:14:20.338 "trsvcid": "47830" 00:14:20.338 }, 00:14:20.338 "auth": { 00:14:20.338 "state": "completed", 00:14:20.338 "digest": "sha256", 00:14:20.338 "dhgroup": "null" 00:14:20.338 } 00:14:20.338 } 00:14:20.338 ]' 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:20.338 14:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.598 14:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:14:21.532 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.532 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.532 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:21.532 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.532 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.532 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:21.532 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:21.532 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:21.790 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:22.048 00:14:22.048 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:22.048 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:22.048 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:22.306 { 00:14:22.306 "cntlid": 7, 00:14:22.306 "qid": 0, 00:14:22.306 "state": "enabled", 00:14:22.306 "thread": "nvmf_tgt_poll_group_000", 00:14:22.306 "listen_address": { 00:14:22.306 "trtype": "TCP", 00:14:22.306 "adrfam": "IPv4", 00:14:22.306 "traddr": "10.0.0.2", 00:14:22.306 "trsvcid": "4420" 00:14:22.306 }, 00:14:22.306 "peer_address": { 00:14:22.306 "trtype": "TCP", 00:14:22.306 "adrfam": "IPv4", 00:14:22.306 "traddr": "10.0.0.1", 00:14:22.306 "trsvcid": "47850" 00:14:22.306 }, 00:14:22.306 "auth": { 00:14:22.306 "state": "completed", 00:14:22.306 "digest": "sha256", 00:14:22.306 "dhgroup": "null" 00:14:22.306 } 00:14:22.306 } 00:14:22.306 ]' 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:22.306 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:22.564 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:22.564 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.564 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.564 14:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.823 14:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:23.759 14:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.759 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:24.325 00:14:24.325 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.325 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.325 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.584 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.584 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.584 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.584 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.584 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.584 14:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:24.584 { 00:14:24.584 "cntlid": 9, 00:14:24.584 "qid": 0, 00:14:24.584 "state": "enabled", 00:14:24.584 "thread": "nvmf_tgt_poll_group_000", 00:14:24.584 "listen_address": { 00:14:24.584 "trtype": "TCP", 00:14:24.584 "adrfam": "IPv4", 00:14:24.584 "traddr": "10.0.0.2", 00:14:24.584 "trsvcid": "4420" 00:14:24.584 }, 00:14:24.584 "peer_address": { 00:14:24.584 "trtype": "TCP", 00:14:24.584 "adrfam": "IPv4", 00:14:24.584 "traddr": "10.0.0.1", 00:14:24.584 "trsvcid": "47878" 00:14:24.584 }, 00:14:24.584 "auth": { 00:14:24.584 "state": "completed", 00:14:24.584 "digest": "sha256", 00:14:24.584 "dhgroup": "ffdhe2048" 00:14:24.584 } 00:14:24.584 } 00:14:24.585 ]' 00:14:24.585 14:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:24.585 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.585 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:24.585 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.585 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:24.585 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.585 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.585 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.843 14:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:14:25.778 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.778 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.778 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.779 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.779 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.779 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.779 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.779 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.036 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.294 00:14:26.295 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.295 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.295 14:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.552 { 00:14:26.552 "cntlid": 11, 00:14:26.552 "qid": 0, 00:14:26.552 "state": "enabled", 00:14:26.552 "thread": "nvmf_tgt_poll_group_000", 00:14:26.552 "listen_address": { 
00:14:26.552 "trtype": "TCP", 00:14:26.552 "adrfam": "IPv4", 00:14:26.552 "traddr": "10.0.0.2", 00:14:26.552 "trsvcid": "4420" 00:14:26.552 }, 00:14:26.552 "peer_address": { 00:14:26.552 "trtype": "TCP", 00:14:26.552 "adrfam": "IPv4", 00:14:26.552 "traddr": "10.0.0.1", 00:14:26.552 "trsvcid": "47910" 00:14:26.552 }, 00:14:26.552 "auth": { 00:14:26.552 "state": "completed", 00:14:26.552 "digest": "sha256", 00:14:26.552 "dhgroup": "ffdhe2048" 00:14:26.552 } 00:14:26.552 } 00:14:26.552 ]' 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.552 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.811 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.811 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.811 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.071 14:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.008 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.575 00:14:28.575 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:28.576 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:28.576 14:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.576 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.576 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.576 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.576 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.576 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.576 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:28.576 { 00:14:28.576 "cntlid": 13, 00:14:28.576 "qid": 0, 00:14:28.576 "state": "enabled", 00:14:28.576 "thread": "nvmf_tgt_poll_group_000", 00:14:28.576 "listen_address": { 00:14:28.576 "trtype": "TCP", 00:14:28.576 "adrfam": "IPv4", 00:14:28.576 "traddr": "10.0.0.2", 00:14:28.576 "trsvcid": "4420" 00:14:28.576 }, 00:14:28.576 "peer_address": { 00:14:28.576 "trtype": "TCP", 00:14:28.576 "adrfam": "IPv4", 00:14:28.576 "traddr": "10.0.0.1", 00:14:28.576 "trsvcid": "47944" 00:14:28.576 }, 00:14:28.576 "auth": { 00:14:28.576 
"state": "completed", 00:14:28.576 "digest": "sha256", 00:14:28.576 "dhgroup": "ffdhe2048" 00:14:28.576 } 00:14:28.576 } 00:14:28.576 ]' 00:14:28.576 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.834 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.834 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.834 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:28.834 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.834 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.834 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.834 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.091 14:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:14:30.028 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.028 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.028 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.028 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.028 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.028 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.028 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:30.028 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.286 14:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:30.544 00:14:30.544 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.544 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.544 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.802 { 00:14:30.802 "cntlid": 15, 00:14:30.802 "qid": 0, 00:14:30.802 "state": "enabled", 00:14:30.802 "thread": "nvmf_tgt_poll_group_000", 00:14:30.802 "listen_address": { 00:14:30.802 "trtype": "TCP", 00:14:30.802 "adrfam": "IPv4", 00:14:30.802 "traddr": "10.0.0.2", 00:14:30.802 "trsvcid": "4420" 00:14:30.802 }, 00:14:30.802 "peer_address": { 00:14:30.802 "trtype": "TCP", 00:14:30.802 "adrfam": "IPv4", 00:14:30.802 "traddr": "10.0.0.1", 00:14:30.802 "trsvcid": "55464" 00:14:30.802 }, 00:14:30.802 "auth": { 00:14:30.802 "state": "completed", 00:14:30.802 "digest": "sha256", 00:14:30.802 "dhgroup": "ffdhe2048" 00:14:30.802 } 00:14:30.802 } 00:14:30.802 ]' 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.802 14:16:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:30.802 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.084 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.084 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.084 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.084 14:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.024 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.282 14:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:32.540 00:14:32.540 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.540 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.540 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.798 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.798 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.798 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.798 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.798 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.798 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.798 { 00:14:32.798 "cntlid": 17, 00:14:32.798 "qid": 0, 00:14:32.798 "state": "enabled", 00:14:32.798 "thread": "nvmf_tgt_poll_group_000", 00:14:32.798 "listen_address": { 00:14:32.798 "trtype": "TCP", 00:14:32.798 "adrfam": "IPv4", 00:14:32.798 "traddr": "10.0.0.2", 00:14:32.798 "trsvcid": "4420" 00:14:32.798 }, 00:14:32.798 "peer_address": { 00:14:32.798 "trtype": "TCP", 00:14:32.798 "adrfam": "IPv4", 00:14:32.798 "traddr": "10.0.0.1", 00:14:32.798 "trsvcid": "55504" 00:14:32.798 }, 00:14:32.798 "auth": { 00:14:32.798 "state": "completed", 00:14:32.798 "digest": "sha256", 00:14:32.798 "dhgroup": "ffdhe3072" 00:14:32.798 } 00:14:32.798 } 00:14:32.798 ]' 00:14:32.798 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:33.056 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.056 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:33.056 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.056 14:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:33.056 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.056 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.056 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.363 14:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.299 14:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.299 14:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:34.866 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.866 { 00:14:34.866 "cntlid": 19, 00:14:34.866 "qid": 0, 00:14:34.866 "state": "enabled", 00:14:34.866 "thread": "nvmf_tgt_poll_group_000", 00:14:34.866 "listen_address": { 00:14:34.866 "trtype": "TCP", 00:14:34.866 "adrfam": "IPv4", 00:14:34.866 "traddr": "10.0.0.2", 00:14:34.866 "trsvcid": "4420" 00:14:34.866 }, 00:14:34.866 "peer_address": { 00:14:34.866 "trtype": "TCP", 00:14:34.866 "adrfam": "IPv4", 00:14:34.866 "traddr": "10.0.0.1", 00:14:34.866 "trsvcid": "55536" 00:14:34.866 }, 00:14:34.866 "auth": { 00:14:34.866 "state": "completed", 00:14:34.866 "digest": "sha256", 00:14:34.866 "dhgroup": "ffdhe3072" 00:14:34.866 } 00:14:34.866 } 00:14:34.866 ]' 00:14:34.866 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.124 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.124 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.124 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.124 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.124 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.124 14:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.124 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.382 14:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:14:36.317 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.317 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.317 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.317 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.317 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.317 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.317 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.318 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:36.577 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:36.577 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.577 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.577 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:36.577 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:36.578 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.578 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.578 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.578 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.578 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.578 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.578 14:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:36.836 00:14:36.836 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:36.836 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:36.836 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.093 { 00:14:37.093 "cntlid": 21, 00:14:37.093 "qid": 0, 00:14:37.093 "state": "enabled", 00:14:37.093 "thread": "nvmf_tgt_poll_group_000", 00:14:37.093 "listen_address": { 00:14:37.093 "trtype": "TCP", 00:14:37.093 "adrfam": "IPv4", 00:14:37.093 "traddr": "10.0.0.2", 00:14:37.093 "trsvcid": "4420" 00:14:37.093 }, 00:14:37.093 "peer_address": { 00:14:37.093 "trtype": "TCP", 00:14:37.093 "adrfam": "IPv4", 00:14:37.093 "traddr": "10.0.0.1", 00:14:37.093 "trsvcid": "55560" 00:14:37.093 }, 00:14:37.093 "auth": { 00:14:37.093 "state": "completed", 00:14:37.093 "digest": "sha256", 00:14:37.093 "dhgroup": "ffdhe3072" 00:14:37.093 } 00:14:37.093 } 00:14:37.093 ]' 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:37.093 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.094 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.094 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.094 14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.351 
14:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:14:38.282 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.282 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.282 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.282 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.282 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.282 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.282 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.282 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.282 14:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.540 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.798 00:14:38.798 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.798 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.798 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.057 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.057 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.057 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.057 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.057 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.057 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.057 { 00:14:39.057 "cntlid": 23, 00:14:39.057 "qid": 0, 00:14:39.057 "state": "enabled", 00:14:39.057 "thread": "nvmf_tgt_poll_group_000", 00:14:39.057 "listen_address": { 00:14:39.057 "trtype": "TCP", 00:14:39.057 "adrfam": "IPv4", 00:14:39.057 "traddr": "10.0.0.2", 00:14:39.057 "trsvcid": "4420" 00:14:39.057 }, 00:14:39.057 "peer_address": { 00:14:39.057 "trtype": "TCP", 00:14:39.057 "adrfam": "IPv4", 00:14:39.057 "traddr": "10.0.0.1", 00:14:39.057 "trsvcid": "55586" 00:14:39.057 }, 00:14:39.057 "auth": { 00:14:39.057 "state": "completed", 00:14:39.057 "digest": "sha256", 00:14:39.057 "dhgroup": "ffdhe3072" 00:14:39.057 } 00:14:39.057 } 00:14:39.057 ]' 00:14:39.057 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:39.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.315 14:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.573 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:14:40.508 14:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.508 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:40.508 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.508 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.508 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.508 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.508 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.508 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:40.508 14:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:40.766 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:40.766 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.766 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.766 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:40.766 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.766 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.766 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.766 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.767 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.767 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.767 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.767 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:41.024 00:14:41.024 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.024 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.024 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.282 { 00:14:41.282 "cntlid": 25, 00:14:41.282 "qid": 0, 00:14:41.282 "state": "enabled", 00:14:41.282 "thread": "nvmf_tgt_poll_group_000", 00:14:41.282 "listen_address": { 00:14:41.282 "trtype": "TCP", 00:14:41.282 "adrfam": "IPv4", 00:14:41.282 "traddr": "10.0.0.2", 00:14:41.282 "trsvcid": "4420" 00:14:41.282 }, 00:14:41.282 "peer_address": { 00:14:41.282 "trtype": "TCP", 00:14:41.282 "adrfam": "IPv4", 00:14:41.282 "traddr": "10.0.0.1", 00:14:41.282 "trsvcid": "53636" 00:14:41.282 }, 00:14:41.282 "auth": { 00:14:41.282 "state": "completed", 00:14:41.282 "digest": "sha256", 00:14:41.282 "dhgroup": "ffdhe4096" 00:14:41.282 } 00:14:41.282 } 00:14:41.282 ]' 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:41.282 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.540 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.540 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.540 14:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.800 14:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
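(For readability, the per-dhgroup iteration that the log above exercises can be condensed into the sketch below. It is not the verbatim target/auth.sh: only commands that appear in this log are used; "key0"/"ckey0" refer to DH-HMAC-CHAP keys registered earlier in the test run (not shown here), rpc_cmd is assumed to address the target's default RPC socket, and the --dhchap-secret / --dhchap-ctrl-secret values are placeholders for the DHHC-1 strings printed in the log.)

    #!/usr/bin/env bash
    # Condensed sketch of one auth iteration, under the assumptions stated above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side SPDK app
    rpc_cmd() { "$rpc" "$@"; }                         # assumed: target on default socket

    # 1. Restrict the host to one digest/dhgroup combination for this iteration.
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # 2. Allow the host NQN on the subsystem with the DH-HMAC-CHAP key pair.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach from the host side, check the qpair reports digest/dhgroup and
    #    auth state "completed", then detach.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'
    hostrpc bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake with the kernel initiator, then clean up.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret "$DHCHAP_SECRET" --dhchap-ctrl-secret "$DHCHAP_CTRL_SECRET"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

The remainder of the log repeats this pattern for each key index (key0..key3) and for the larger FFDHE groups (ffdhe6144, ffdhe8192).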
00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.737 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:43.305 00:14:43.306 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:43.306 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:43.306 14:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:43.564 { 00:14:43.564 "cntlid": 27, 00:14:43.564 "qid": 0, 00:14:43.564 "state": "enabled", 00:14:43.564 "thread": "nvmf_tgt_poll_group_000", 00:14:43.564 "listen_address": { 00:14:43.564 "trtype": "TCP", 00:14:43.564 "adrfam": "IPv4", 00:14:43.564 "traddr": "10.0.0.2", 00:14:43.564 "trsvcid": "4420" 00:14:43.564 }, 00:14:43.564 "peer_address": { 00:14:43.564 "trtype": "TCP", 00:14:43.564 "adrfam": "IPv4", 00:14:43.564 "traddr": "10.0.0.1", 00:14:43.564 "trsvcid": "53664" 00:14:43.564 }, 00:14:43.564 "auth": { 00:14:43.564 "state": "completed", 00:14:43.564 "digest": "sha256", 00:14:43.564 "dhgroup": "ffdhe4096" 00:14:43.564 } 00:14:43.564 } 00:14:43.564 ]' 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.564 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.825 14:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:14:44.760 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.760 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.760 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.760 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.760 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.760 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.760 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:44.760 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.018 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:45.585 00:14:45.585 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.585 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.585 14:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.585 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.585 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.585 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.585 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.585 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.585 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.585 { 00:14:45.585 "cntlid": 29, 00:14:45.585 "qid": 0, 00:14:45.585 "state": "enabled", 00:14:45.585 "thread": "nvmf_tgt_poll_group_000", 00:14:45.585 "listen_address": { 00:14:45.585 "trtype": "TCP", 00:14:45.585 "adrfam": "IPv4", 00:14:45.586 "traddr": "10.0.0.2", 00:14:45.586 "trsvcid": "4420" 00:14:45.586 }, 00:14:45.586 "peer_address": { 00:14:45.586 "trtype": "TCP", 00:14:45.586 "adrfam": "IPv4", 00:14:45.586 "traddr": "10.0.0.1", 00:14:45.586 "trsvcid": "53698" 00:14:45.586 }, 00:14:45.586 "auth": { 00:14:45.586 "state": "completed", 00:14:45.586 "digest": "sha256", 00:14:45.586 "dhgroup": "ffdhe4096" 00:14:45.586 } 00:14:45.586 } 00:14:45.586 ]' 00:14:45.586 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.843 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.843 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.843 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:45.843 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.844 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.844 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.844 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.101 14:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:14:47.038 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.038 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.038 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.038 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.038 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.038 14:16:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.038 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.038 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.296 14:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.566 00:14:47.566 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.566 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.566 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.851 { 00:14:47.851 "cntlid": 31, 00:14:47.851 "qid": 0, 00:14:47.851 "state": "enabled", 00:14:47.851 "thread": "nvmf_tgt_poll_group_000", 00:14:47.851 "listen_address": { 00:14:47.851 "trtype": "TCP", 00:14:47.851 "adrfam": "IPv4", 00:14:47.851 "traddr": "10.0.0.2", 00:14:47.851 "trsvcid": "4420" 00:14:47.851 }, 00:14:47.851 "peer_address": { 00:14:47.851 "trtype": "TCP", 00:14:47.851 "adrfam": "IPv4", 00:14:47.851 "traddr": "10.0.0.1", 00:14:47.851 "trsvcid": "53714" 00:14:47.851 }, 00:14:47.851 "auth": { 00:14:47.851 "state": "completed", 00:14:47.851 "digest": "sha256", 00:14:47.851 "dhgroup": "ffdhe4096" 00:14:47.851 } 00:14:47.851 } 00:14:47.851 ]' 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.851 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.110 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:48.110 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.110 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.110 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.110 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.369 14:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:49.302 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.303 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.303 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.303 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.303 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.303 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.303 14:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.868 00:14:49.868 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.868 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.868 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.126 { 00:14:50.126 "cntlid": 33, 00:14:50.126 "qid": 0, 00:14:50.126 "state": "enabled", 00:14:50.126 "thread": "nvmf_tgt_poll_group_000", 00:14:50.126 "listen_address": { 
00:14:50.126 "trtype": "TCP", 00:14:50.126 "adrfam": "IPv4", 00:14:50.126 "traddr": "10.0.0.2", 00:14:50.126 "trsvcid": "4420" 00:14:50.126 }, 00:14:50.126 "peer_address": { 00:14:50.126 "trtype": "TCP", 00:14:50.126 "adrfam": "IPv4", 00:14:50.126 "traddr": "10.0.0.1", 00:14:50.126 "trsvcid": "53750" 00:14:50.126 }, 00:14:50.126 "auth": { 00:14:50.126 "state": "completed", 00:14:50.126 "digest": "sha256", 00:14:50.126 "dhgroup": "ffdhe6144" 00:14:50.126 } 00:14:50.126 } 00:14:50.126 ]' 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.126 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.385 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:50.385 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.385 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.385 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.385 14:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.643 14:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:14:51.580 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.580 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.580 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.580 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.580 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.580 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.580 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.580 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:51.838 14:16:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.838 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:52.406 00:14:52.406 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.406 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.406 14:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.664 { 00:14:52.664 "cntlid": 35, 00:14:52.664 "qid": 0, 00:14:52.664 "state": "enabled", 00:14:52.664 "thread": "nvmf_tgt_poll_group_000", 00:14:52.664 "listen_address": { 00:14:52.664 "trtype": "TCP", 00:14:52.664 "adrfam": "IPv4", 00:14:52.664 "traddr": "10.0.0.2", 00:14:52.664 "trsvcid": "4420" 00:14:52.664 }, 00:14:52.664 "peer_address": { 00:14:52.664 "trtype": "TCP", 00:14:52.664 "adrfam": "IPv4", 00:14:52.664 "traddr": "10.0.0.1", 00:14:52.664 "trsvcid": "44146" 00:14:52.664 
}, 00:14:52.664 "auth": { 00:14:52.664 "state": "completed", 00:14:52.664 "digest": "sha256", 00:14:52.664 "dhgroup": "ffdhe6144" 00:14:52.664 } 00:14:52.664 } 00:14:52.664 ]' 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.664 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.924 14:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:14:53.864 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.864 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.864 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.864 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.864 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.864 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.864 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.864 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.121 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:54.121 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.121 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.121 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:54.121 14:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:54.121 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.121 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.121 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.122 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.122 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.122 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.122 14:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.689 00:14:54.690 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.690 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.690 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.948 { 00:14:54.948 "cntlid": 37, 00:14:54.948 "qid": 0, 00:14:54.948 "state": "enabled", 00:14:54.948 "thread": "nvmf_tgt_poll_group_000", 00:14:54.948 "listen_address": { 00:14:54.948 "trtype": "TCP", 00:14:54.948 "adrfam": "IPv4", 00:14:54.948 "traddr": "10.0.0.2", 00:14:54.948 "trsvcid": "4420" 00:14:54.948 }, 00:14:54.948 "peer_address": { 00:14:54.948 "trtype": "TCP", 00:14:54.948 "adrfam": "IPv4", 00:14:54.948 "traddr": "10.0.0.1", 00:14:54.948 "trsvcid": "44184" 00:14:54.948 }, 00:14:54.948 "auth": { 00:14:54.948 "state": "completed", 00:14:54.948 "digest": "sha256", 00:14:54.948 "dhgroup": "ffdhe6144" 00:14:54.948 } 00:14:54.948 } 00:14:54.948 ]' 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.948 14:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.948 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.206 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.206 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.206 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.463 14:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:14:56.397 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.397 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.397 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.397 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.397 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.397 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:56.397 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.397 14:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.655 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:57.222 00:14:57.222 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.222 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.222 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.222 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.222 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.222 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.222 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.479 { 00:14:57.479 "cntlid": 39, 00:14:57.479 "qid": 0, 00:14:57.479 "state": "enabled", 00:14:57.479 "thread": "nvmf_tgt_poll_group_000", 00:14:57.479 "listen_address": { 00:14:57.479 "trtype": "TCP", 00:14:57.479 "adrfam": "IPv4", 00:14:57.479 "traddr": "10.0.0.2", 00:14:57.479 "trsvcid": "4420" 00:14:57.479 }, 00:14:57.479 "peer_address": { 00:14:57.479 "trtype": "TCP", 00:14:57.479 "adrfam": "IPv4", 00:14:57.479 "traddr": "10.0.0.1", 00:14:57.479 "trsvcid": "44204" 00:14:57.479 }, 00:14:57.479 "auth": { 00:14:57.479 "state": "completed", 00:14:57.479 "digest": "sha256", 00:14:57.479 "dhgroup": "ffdhe6144" 00:14:57.479 } 00:14:57.479 } 00:14:57.479 ]' 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.479 14:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.735 14:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:14:58.669 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.670 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.670 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.670 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.670 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.670 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.670 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.670 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.670 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.926 14:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:59.862 00:14:59.862 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.862 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.862 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.119 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.119 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.119 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.120 { 00:15:00.120 "cntlid": 41, 00:15:00.120 "qid": 0, 00:15:00.120 "state": "enabled", 00:15:00.120 "thread": "nvmf_tgt_poll_group_000", 00:15:00.120 "listen_address": { 00:15:00.120 "trtype": "TCP", 00:15:00.120 "adrfam": "IPv4", 00:15:00.120 "traddr": "10.0.0.2", 00:15:00.120 "trsvcid": "4420" 00:15:00.120 }, 00:15:00.120 "peer_address": { 00:15:00.120 "trtype": "TCP", 00:15:00.120 "adrfam": "IPv4", 00:15:00.120 "traddr": "10.0.0.1", 00:15:00.120 "trsvcid": "44224" 00:15:00.120 }, 00:15:00.120 "auth": { 00:15:00.120 "state": "completed", 00:15:00.120 "digest": "sha256", 00:15:00.120 "dhgroup": "ffdhe8192" 00:15:00.120 } 00:15:00.120 } 00:15:00.120 ]' 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:15:00.120 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.377 14:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:15:01.313 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.313 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.313 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.313 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.313 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.313 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.314 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.314 14:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.571 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.504 00:15:02.504 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.504 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.504 14:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.762 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.762 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.762 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.762 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.762 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.762 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.762 { 00:15:02.762 "cntlid": 43, 00:15:02.762 "qid": 0, 00:15:02.762 "state": "enabled", 00:15:02.762 "thread": "nvmf_tgt_poll_group_000", 00:15:02.762 "listen_address": { 00:15:02.762 "trtype": "TCP", 00:15:02.762 "adrfam": "IPv4", 00:15:02.762 "traddr": "10.0.0.2", 00:15:02.762 "trsvcid": "4420" 00:15:02.762 }, 00:15:02.762 "peer_address": { 00:15:02.762 "trtype": "TCP", 00:15:02.762 "adrfam": "IPv4", 00:15:02.763 "traddr": "10.0.0.1", 00:15:02.763 "trsvcid": "50144" 00:15:02.763 }, 00:15:02.763 "auth": { 00:15:02.763 "state": "completed", 00:15:02.763 "digest": "sha256", 00:15:02.763 "dhgroup": "ffdhe8192" 00:15:02.763 } 00:15:02.763 } 00:15:02.763 ]' 00:15:02.763 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.763 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.763 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.763 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.763 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.763 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.763 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.763 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.021 14:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:15:03.955 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.955 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.955 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.955 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.955 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.955 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.955 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.955 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:04.242 14:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.180 00:15:05.180 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.180 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.180 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.439 { 00:15:05.439 "cntlid": 45, 00:15:05.439 "qid": 0, 00:15:05.439 "state": "enabled", 00:15:05.439 "thread": "nvmf_tgt_poll_group_000", 00:15:05.439 "listen_address": { 00:15:05.439 "trtype": "TCP", 00:15:05.439 "adrfam": "IPv4", 00:15:05.439 "traddr": "10.0.0.2", 00:15:05.439 "trsvcid": "4420" 00:15:05.439 }, 00:15:05.439 "peer_address": { 00:15:05.439 "trtype": "TCP", 00:15:05.439 "adrfam": "IPv4", 00:15:05.439 "traddr": "10.0.0.1", 00:15:05.439 "trsvcid": "50176" 00:15:05.439 }, 00:15:05.439 "auth": { 00:15:05.439 "state": "completed", 00:15:05.439 "digest": "sha256", 00:15:05.439 "dhgroup": "ffdhe8192" 00:15:05.439 } 00:15:05.439 } 00:15:05.439 ]' 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.439 14:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.698 14:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret 
DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:15:06.636 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.636 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.636 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.636 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.636 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.636 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.636 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.636 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.895 14:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.834 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.834 14:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.834 { 00:15:07.834 "cntlid": 47, 00:15:07.834 "qid": 0, 00:15:07.834 "state": "enabled", 00:15:07.834 "thread": "nvmf_tgt_poll_group_000", 00:15:07.834 "listen_address": { 00:15:07.834 "trtype": "TCP", 00:15:07.834 "adrfam": "IPv4", 00:15:07.834 "traddr": "10.0.0.2", 00:15:07.834 "trsvcid": "4420" 00:15:07.834 }, 00:15:07.834 "peer_address": { 00:15:07.834 "trtype": "TCP", 00:15:07.834 "adrfam": "IPv4", 00:15:07.834 "traddr": "10.0.0.1", 00:15:07.834 "trsvcid": "50208" 00:15:07.834 }, 00:15:07.834 "auth": { 00:15:07.834 "state": "completed", 00:15:07.834 "digest": "sha256", 00:15:07.834 "dhgroup": "ffdhe8192" 00:15:07.834 } 00:15:07.834 } 00:15:07.834 ]' 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:07.834 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.092 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.092 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.092 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.351 14:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.288 14:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.854 00:15:09.854 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.854 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.854 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.112 { 00:15:10.112 "cntlid": 49, 00:15:10.112 "qid": 0, 00:15:10.112 "state": "enabled", 00:15:10.112 "thread": "nvmf_tgt_poll_group_000", 00:15:10.112 "listen_address": { 00:15:10.112 "trtype": "TCP", 00:15:10.112 "adrfam": "IPv4", 00:15:10.112 "traddr": "10.0.0.2", 00:15:10.112 "trsvcid": "4420" 00:15:10.112 }, 00:15:10.112 "peer_address": { 00:15:10.112 "trtype": "TCP", 00:15:10.112 "adrfam": "IPv4", 00:15:10.112 "traddr": "10.0.0.1", 00:15:10.112 "trsvcid": "50238" 00:15:10.112 }, 00:15:10.112 "auth": { 00:15:10.112 "state": "completed", 00:15:10.112 "digest": "sha384", 00:15:10.112 "dhgroup": "null" 00:15:10.112 } 00:15:10.112 } 00:15:10.112 ]' 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.112 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.372 14:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:15:11.309 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.309 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.309 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:11.309 14:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.309 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.309 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.309 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:11.309 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.309 14:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:11.568 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.569 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:12.136 00:15:12.136 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.136 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.136 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.394 { 00:15:12.394 "cntlid": 51, 00:15:12.394 "qid": 0, 00:15:12.394 "state": "enabled", 00:15:12.394 "thread": "nvmf_tgt_poll_group_000", 00:15:12.394 "listen_address": { 00:15:12.394 "trtype": "TCP", 00:15:12.394 "adrfam": "IPv4", 00:15:12.394 "traddr": "10.0.0.2", 00:15:12.394 "trsvcid": "4420" 00:15:12.394 }, 00:15:12.394 "peer_address": { 00:15:12.394 "trtype": "TCP", 00:15:12.394 "adrfam": "IPv4", 00:15:12.394 "traddr": "10.0.0.1", 00:15:12.394 "trsvcid": "38054" 00:15:12.394 }, 00:15:12.394 "auth": { 00:15:12.394 "state": "completed", 00:15:12.394 "digest": "sha384", 00:15:12.394 "dhgroup": "null" 00:15:12.394 } 00:15:12.394 } 00:15:12.394 ]' 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.394 14:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.652 14:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:15:13.591 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.591 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.591 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.591 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.591 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.591 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.591 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.591 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.853 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.110 00:15:14.110 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.110 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.110 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.368 { 00:15:14.368 "cntlid": 53, 00:15:14.368 "qid": 0, 00:15:14.368 "state": "enabled", 00:15:14.368 "thread": "nvmf_tgt_poll_group_000", 00:15:14.368 "listen_address": { 00:15:14.368 "trtype": "TCP", 00:15:14.368 "adrfam": "IPv4", 00:15:14.368 "traddr": "10.0.0.2", 00:15:14.368 "trsvcid": "4420" 00:15:14.368 }, 00:15:14.368 "peer_address": { 00:15:14.368 "trtype": "TCP", 00:15:14.368 "adrfam": "IPv4", 00:15:14.368 "traddr": "10.0.0.1", 00:15:14.368 "trsvcid": "38072" 00:15:14.368 }, 00:15:14.368 "auth": { 00:15:14.368 "state": "completed", 00:15:14.368 "digest": "sha384", 00:15:14.368 "dhgroup": "null" 00:15:14.368 } 00:15:14.368 } 00:15:14.368 ]' 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.368 14:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.368 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.368 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.368 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.628 14:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:15:15.567 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.567 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.567 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.567 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.567 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.567 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.567 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.567 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.825 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.083 00:15:16.083 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.083 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.083 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.341 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.341 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.341 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.341 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.341 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.341 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.341 { 00:15:16.341 "cntlid": 55, 00:15:16.341 "qid": 0, 00:15:16.341 "state": "enabled", 00:15:16.341 "thread": "nvmf_tgt_poll_group_000", 00:15:16.341 "listen_address": { 00:15:16.341 "trtype": "TCP", 00:15:16.341 "adrfam": "IPv4", 00:15:16.341 "traddr": "10.0.0.2", 00:15:16.341 "trsvcid": "4420" 00:15:16.341 }, 00:15:16.341 "peer_address": { 
00:15:16.341 "trtype": "TCP", 00:15:16.341 "adrfam": "IPv4", 00:15:16.341 "traddr": "10.0.0.1", 00:15:16.341 "trsvcid": "38098" 00:15:16.341 }, 00:15:16.341 "auth": { 00:15:16.341 "state": "completed", 00:15:16.341 "digest": "sha384", 00:15:16.341 "dhgroup": "null" 00:15:16.341 } 00:15:16.341 } 00:15:16.341 ]' 00:15:16.341 14:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.600 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.600 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.600 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:16.600 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.600 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.600 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.600 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.857 14:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.792 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.049 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.307 00:15:18.307 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.307 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.307 14:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.565 { 00:15:18.565 "cntlid": 57, 00:15:18.565 "qid": 0, 00:15:18.565 "state": "enabled", 00:15:18.565 "thread": "nvmf_tgt_poll_group_000", 00:15:18.565 "listen_address": { 00:15:18.565 "trtype": "TCP", 00:15:18.565 "adrfam": "IPv4", 00:15:18.565 "traddr": "10.0.0.2", 00:15:18.565 "trsvcid": "4420" 00:15:18.565 }, 00:15:18.565 "peer_address": { 00:15:18.565 "trtype": "TCP", 00:15:18.565 "adrfam": "IPv4", 00:15:18.565 "traddr": "10.0.0.1", 00:15:18.565 "trsvcid": "38140" 00:15:18.565 }, 00:15:18.565 "auth": { 00:15:18.565 "state": "completed", 00:15:18.565 "digest": "sha384", 00:15:18.565 "dhgroup": "ffdhe2048" 00:15:18.565 } 00:15:18.565 } 00:15:18.565 ]' 
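
The trace in this section repeats one verification cycle per digest/dhgroup/key combination; the qpairs dump just above (cntlid 57, sha384/ffdhe2048) is the check point of one such cycle, and the jq checks that follow read it back. As a minimal sketch, not the literal auth.sh source, the cycle looks like the commands below. It uses only RPCs, flags, jq filters and addresses that appear verbatim in this log; key0/ckey0 are key names registered earlier in the run (outside this excerpt), the DHHC-1 secret strings are abbreviated to $HOST_SECRET/$CTRL_SECRET, and the target-side rpc_cmd wrapper is represented, as an assumption, by plain rpc.py on its default socket.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOST_SOCK=/var/tmp/host.sock     # host-side SPDK app, reached via the log's "hostrpc" wrapper
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # restrict the host to one digest/dhgroup pair for this iteration
  $RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

  # allow the host on the target subsystem with the keys under test (target-side RPC)
  $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # attach a controller from the host app, forcing DH-HMAC-CHAP with those keys
  $RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # verify the negotiated parameters on the target side, as the three jq checks below do
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'   # expect sha384
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'  # expect ffdhe2048
  $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # expect completed

  # repeat the same combination with the kernel initiator, using the DHHC-1
  # secret strings printed elsewhere in this log, then tear everything down
  $RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$HOST_SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  nvme disconnect -n $SUBNQN
  $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

Note that in the log the nvmf_subsystem_* calls go through rpc_cmd (the target application's RPC socket), while the bdev_nvme_* calls go through hostrpc, i.e. rpc.py -s /var/tmp/host.sock, because the initiator runs as a separate host-side SPDK application.
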
00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.565 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.824 14:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:15:19.757 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.757 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.757 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.757 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.758 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.758 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.758 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:19.758 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.015 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.580 00:15:20.580 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.580 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.580 14:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.580 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.580 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.580 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.580 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.580 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.580 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.580 { 00:15:20.580 "cntlid": 59, 00:15:20.580 "qid": 0, 00:15:20.580 "state": "enabled", 00:15:20.580 "thread": "nvmf_tgt_poll_group_000", 00:15:20.580 "listen_address": { 00:15:20.580 "trtype": "TCP", 00:15:20.580 "adrfam": "IPv4", 00:15:20.580 "traddr": "10.0.0.2", 00:15:20.580 "trsvcid": "4420" 00:15:20.580 }, 00:15:20.580 "peer_address": { 00:15:20.580 "trtype": "TCP", 00:15:20.580 "adrfam": "IPv4", 00:15:20.580 "traddr": "10.0.0.1", 00:15:20.580 "trsvcid": "43754" 00:15:20.580 }, 00:15:20.580 "auth": { 00:15:20.580 "state": "completed", 00:15:20.580 "digest": "sha384", 00:15:20.580 "dhgroup": "ffdhe2048" 00:15:20.580 } 00:15:20.580 } 00:15:20.580 ]' 00:15:20.580 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.862 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.862 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.862 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:20.862 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.862 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.862 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.862 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.137 14:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:15:22.072 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:22.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:22.072 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:22.072 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.072 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.072 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.072 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:22.072 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.072 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.331 
14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.331 14:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:22.590 00:15:22.590 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.590 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.590 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.848 { 00:15:22.848 "cntlid": 61, 00:15:22.848 "qid": 0, 00:15:22.848 "state": "enabled", 00:15:22.848 "thread": "nvmf_tgt_poll_group_000", 00:15:22.848 "listen_address": { 00:15:22.848 "trtype": "TCP", 00:15:22.848 "adrfam": "IPv4", 00:15:22.848 "traddr": "10.0.0.2", 00:15:22.848 "trsvcid": "4420" 00:15:22.848 }, 00:15:22.848 "peer_address": { 00:15:22.848 "trtype": "TCP", 00:15:22.848 "adrfam": "IPv4", 00:15:22.848 "traddr": "10.0.0.1", 00:15:22.848 "trsvcid": "43780" 00:15:22.848 }, 00:15:22.848 "auth": { 00:15:22.848 "state": "completed", 00:15:22.848 "digest": "sha384", 00:15:22.848 "dhgroup": "ffdhe2048" 00:15:22.848 } 00:15:22.848 } 00:15:22.848 ]' 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.848 14:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.848 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.106 14:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:15:24.039 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.039 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.039 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.039 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.039 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.039 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:24.039 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.039 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.297 
14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.297 14:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:24.555 00:15:24.555 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.555 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.555 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.814 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.814 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.814 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.814 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.814 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.814 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.814 { 00:15:24.814 "cntlid": 63, 00:15:24.814 "qid": 0, 00:15:24.814 "state": "enabled", 00:15:24.814 "thread": "nvmf_tgt_poll_group_000", 00:15:24.814 "listen_address": { 00:15:24.814 "trtype": "TCP", 00:15:24.814 "adrfam": "IPv4", 00:15:24.814 "traddr": "10.0.0.2", 00:15:24.814 "trsvcid": "4420" 00:15:24.814 }, 00:15:24.814 "peer_address": { 00:15:24.814 "trtype": "TCP", 00:15:24.814 "adrfam": "IPv4", 00:15:24.814 "traddr": "10.0.0.1", 00:15:24.814 "trsvcid": "43792" 00:15:24.814 }, 00:15:24.814 "auth": { 00:15:24.814 "state": "completed", 00:15:24.814 "digest": "sha384", 00:15:24.814 "dhgroup": "ffdhe2048" 00:15:24.814 } 00:15:24.814 } 00:15:24.814 ]' 00:15:24.814 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.072 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.072 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.072 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:25.072 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:25.072 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.072 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.072 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:25.353 14:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:26.288 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.548 14:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.548 14:16:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:26.806 00:15:26.806 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.806 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.806 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.064 { 00:15:27.064 "cntlid": 65, 00:15:27.064 "qid": 0, 00:15:27.064 "state": "enabled", 00:15:27.064 "thread": "nvmf_tgt_poll_group_000", 00:15:27.064 "listen_address": { 00:15:27.064 "trtype": "TCP", 00:15:27.064 "adrfam": "IPv4", 00:15:27.064 "traddr": "10.0.0.2", 00:15:27.064 "trsvcid": "4420" 00:15:27.064 }, 00:15:27.064 "peer_address": { 00:15:27.064 "trtype": "TCP", 00:15:27.064 "adrfam": "IPv4", 00:15:27.064 "traddr": "10.0.0.1", 00:15:27.064 "trsvcid": "43824" 00:15:27.064 }, 00:15:27.064 "auth": { 00:15:27.064 "state": "completed", 00:15:27.064 "digest": "sha384", 00:15:27.064 "dhgroup": "ffdhe3072" 00:15:27.064 } 00:15:27.064 } 00:15:27.064 ]' 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.064 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.324 14:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:15:28.258 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.258 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.258 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.258 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.258 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.258 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.258 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:28.258 14:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.516 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:29.084 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.084 { 00:15:29.084 "cntlid": 67, 00:15:29.084 "qid": 0, 00:15:29.084 "state": "enabled", 00:15:29.084 "thread": "nvmf_tgt_poll_group_000", 00:15:29.084 "listen_address": { 00:15:29.084 "trtype": "TCP", 00:15:29.084 "adrfam": "IPv4", 00:15:29.084 "traddr": "10.0.0.2", 00:15:29.084 "trsvcid": "4420" 00:15:29.084 }, 00:15:29.084 "peer_address": { 00:15:29.084 "trtype": "TCP", 00:15:29.084 "adrfam": "IPv4", 00:15:29.084 "traddr": "10.0.0.1", 00:15:29.084 "trsvcid": "43856" 00:15:29.084 }, 00:15:29.084 "auth": { 00:15:29.084 "state": "completed", 00:15:29.084 "digest": "sha384", 00:15:29.084 "dhgroup": "ffdhe3072" 00:15:29.084 } 00:15:29.084 } 00:15:29.084 ]' 00:15:29.084 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.342 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.342 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.342 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:29.342 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.342 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.342 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.342 14:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.601 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:15:30.539 14:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.539 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.539 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.539 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.539 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.539 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.539 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.540 14:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.798 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:31.055 00:15:31.055 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:31.055 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:31.055 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.313 { 00:15:31.313 "cntlid": 69, 00:15:31.313 "qid": 0, 00:15:31.313 "state": "enabled", 00:15:31.313 "thread": "nvmf_tgt_poll_group_000", 00:15:31.313 "listen_address": { 00:15:31.313 "trtype": "TCP", 00:15:31.313 "adrfam": "IPv4", 00:15:31.313 "traddr": "10.0.0.2", 00:15:31.313 "trsvcid": "4420" 00:15:31.313 }, 00:15:31.313 "peer_address": { 00:15:31.313 "trtype": "TCP", 00:15:31.313 "adrfam": "IPv4", 00:15:31.313 "traddr": "10.0.0.1", 00:15:31.313 "trsvcid": "55098" 00:15:31.313 }, 00:15:31.313 "auth": { 00:15:31.313 "state": "completed", 00:15:31.313 "digest": "sha384", 00:15:31.313 "dhgroup": "ffdhe3072" 00:15:31.313 } 00:15:31.313 } 00:15:31.313 ]' 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.313 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.571 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:31.571 14:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.571 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.571 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.571 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.829 14:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:15:32.764 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.764 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.764 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.764 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.764 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.764 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.764 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:32.764 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.021 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:33.279 00:15:33.279 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.279 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.279 14:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.537 14:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.537 { 00:15:33.537 "cntlid": 71, 00:15:33.537 "qid": 0, 00:15:33.537 "state": "enabled", 00:15:33.537 "thread": "nvmf_tgt_poll_group_000", 00:15:33.537 "listen_address": { 00:15:33.537 "trtype": "TCP", 00:15:33.537 "adrfam": "IPv4", 00:15:33.537 "traddr": "10.0.0.2", 00:15:33.537 "trsvcid": "4420" 00:15:33.537 }, 00:15:33.537 "peer_address": { 00:15:33.537 "trtype": "TCP", 00:15:33.537 "adrfam": "IPv4", 00:15:33.537 "traddr": "10.0.0.1", 00:15:33.537 "trsvcid": "55128" 00:15:33.537 }, 00:15:33.537 "auth": { 00:15:33.537 "state": "completed", 00:15:33.537 "digest": "sha384", 00:15:33.537 "dhgroup": "ffdhe3072" 00:15:33.537 } 00:15:33.537 } 00:15:33.537 ]' 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.537 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.807 14:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:15:34.745 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.745 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.745 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.745 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.745 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.745 14:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:34.745 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.745 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:34.745 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.002 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.259 00:15:35.517 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.517 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.517 14:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.517 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.517 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.517 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.517 14:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.517 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.517 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.517 { 00:15:35.517 "cntlid": 73, 00:15:35.517 "qid": 0, 00:15:35.517 "state": "enabled", 00:15:35.517 "thread": "nvmf_tgt_poll_group_000", 00:15:35.517 "listen_address": { 00:15:35.517 "trtype": "TCP", 00:15:35.517 "adrfam": "IPv4", 00:15:35.517 "traddr": "10.0.0.2", 00:15:35.517 "trsvcid": "4420" 00:15:35.517 }, 00:15:35.517 "peer_address": { 00:15:35.517 "trtype": "TCP", 00:15:35.517 "adrfam": "IPv4", 00:15:35.517 "traddr": "10.0.0.1", 00:15:35.518 "trsvcid": "55144" 00:15:35.518 }, 00:15:35.518 "auth": { 00:15:35.518 "state": "completed", 00:15:35.518 "digest": "sha384", 00:15:35.518 "dhgroup": "ffdhe4096" 00:15:35.518 } 00:15:35.518 } 00:15:35.518 ]' 00:15:35.774 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.774 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.774 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.774 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:35.774 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.774 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.774 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.774 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.031 14:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:15:36.966 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.966 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.966 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.966 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.966 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.966 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.966 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:36.966 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.255 14:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.513 00:15:37.513 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.513 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.513 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.770 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.770 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.770 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.770 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.771 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.771 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:37.771 { 00:15:37.771 "cntlid": 75, 00:15:37.771 "qid": 0, 00:15:37.771 "state": "enabled", 00:15:37.771 "thread": "nvmf_tgt_poll_group_000", 00:15:37.771 "listen_address": { 00:15:37.771 "trtype": "TCP", 00:15:37.771 "adrfam": "IPv4", 00:15:37.771 "traddr": "10.0.0.2", 00:15:37.771 "trsvcid": "4420" 00:15:37.771 }, 00:15:37.771 "peer_address": { 00:15:37.771 "trtype": "TCP", 00:15:37.771 "adrfam": "IPv4", 00:15:37.771 "traddr": "10.0.0.1", 00:15:37.771 "trsvcid": "55168" 00:15:37.771 }, 00:15:37.771 "auth": { 00:15:37.771 "state": "completed", 00:15:37.771 "digest": "sha384", 00:15:37.771 "dhgroup": "ffdhe4096" 00:15:37.771 } 00:15:37.771 } 00:15:37.771 ]' 00:15:37.771 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.771 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.771 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.028 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:38.028 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.028 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.028 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.028 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.285 14:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:15:39.216 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.216 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.216 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.216 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.216 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.216 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.216 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.216 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:39.473 
14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:39.473 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.473 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.473 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:39.474 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.474 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.474 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.474 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.474 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.474 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.474 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.474 14:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.731 00:15:39.731 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.731 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.731 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.988 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.988 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.988 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.988 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.988 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.988 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.988 { 00:15:39.988 "cntlid": 77, 00:15:39.988 "qid": 0, 00:15:39.988 "state": "enabled", 00:15:39.988 "thread": "nvmf_tgt_poll_group_000", 00:15:39.988 "listen_address": { 00:15:39.988 "trtype": "TCP", 00:15:39.988 "adrfam": "IPv4", 00:15:39.988 "traddr": "10.0.0.2", 00:15:39.988 "trsvcid": "4420" 00:15:39.988 }, 00:15:39.988 "peer_address": { 
00:15:39.988 "trtype": "TCP", 00:15:39.989 "adrfam": "IPv4", 00:15:39.989 "traddr": "10.0.0.1", 00:15:39.989 "trsvcid": "55192" 00:15:39.989 }, 00:15:39.989 "auth": { 00:15:39.989 "state": "completed", 00:15:39.989 "digest": "sha384", 00:15:39.989 "dhgroup": "ffdhe4096" 00:15:39.989 } 00:15:39.989 } 00:15:39.989 ]' 00:15:39.989 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.989 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.989 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.989 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:39.989 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.247 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.247 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.247 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.247 14:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:15:41.181 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.181 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.181 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.181 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.181 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.181 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.181 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.181 14:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.747 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:42.005 00:15:42.005 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.005 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.005 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.263 { 00:15:42.263 "cntlid": 79, 00:15:42.263 "qid": 0, 00:15:42.263 "state": "enabled", 00:15:42.263 "thread": "nvmf_tgt_poll_group_000", 00:15:42.263 "listen_address": { 00:15:42.263 "trtype": "TCP", 00:15:42.263 "adrfam": "IPv4", 00:15:42.263 "traddr": "10.0.0.2", 00:15:42.263 "trsvcid": "4420" 00:15:42.263 }, 00:15:42.263 "peer_address": { 00:15:42.263 "trtype": "TCP", 00:15:42.263 "adrfam": "IPv4", 00:15:42.263 "traddr": "10.0.0.1", 00:15:42.263 "trsvcid": "59898" 00:15:42.263 }, 00:15:42.263 "auth": { 00:15:42.263 "state": "completed", 00:15:42.263 "digest": "sha384", 00:15:42.263 "dhgroup": "ffdhe4096" 00:15:42.263 } 00:15:42.263 } 00:15:42.263 ]' 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.263 14:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.520 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:43.456 14:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.714 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.278 00:15:44.278 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.278 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.278 14:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.536 { 00:15:44.536 "cntlid": 81, 00:15:44.536 "qid": 0, 00:15:44.536 "state": "enabled", 00:15:44.536 "thread": "nvmf_tgt_poll_group_000", 00:15:44.536 "listen_address": { 00:15:44.536 "trtype": "TCP", 00:15:44.536 "adrfam": "IPv4", 00:15:44.536 "traddr": "10.0.0.2", 00:15:44.536 "trsvcid": "4420" 00:15:44.536 }, 00:15:44.536 "peer_address": { 00:15:44.536 "trtype": "TCP", 00:15:44.536 "adrfam": "IPv4", 00:15:44.536 "traddr": "10.0.0.1", 00:15:44.536 "trsvcid": "59922" 00:15:44.536 }, 00:15:44.536 "auth": { 00:15:44.536 "state": "completed", 00:15:44.536 "digest": "sha384", 00:15:44.536 "dhgroup": "ffdhe6144" 00:15:44.536 } 00:15:44.536 } 00:15:44.536 ]' 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.536 14:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.536 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.794 14:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:15:45.727 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.727 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.727 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.727 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.727 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.727 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.727 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.727 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.984 14:17:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.984 14:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:46.551 00:15:46.551 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.551 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.551 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.809 { 00:15:46.809 "cntlid": 83, 00:15:46.809 "qid": 0, 00:15:46.809 "state": "enabled", 00:15:46.809 "thread": "nvmf_tgt_poll_group_000", 00:15:46.809 "listen_address": { 00:15:46.809 "trtype": "TCP", 00:15:46.809 "adrfam": "IPv4", 00:15:46.809 "traddr": "10.0.0.2", 00:15:46.809 "trsvcid": "4420" 00:15:46.809 }, 00:15:46.809 "peer_address": { 00:15:46.809 "trtype": "TCP", 00:15:46.809 "adrfam": "IPv4", 00:15:46.809 "traddr": "10.0.0.1", 00:15:46.809 "trsvcid": "59940" 00:15:46.809 }, 00:15:46.809 "auth": { 00:15:46.809 "state": "completed", 00:15:46.809 "digest": "sha384", 00:15:46.809 "dhgroup": "ffdhe6144" 00:15:46.809 } 00:15:46.809 } 00:15:46.809 ]' 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:46.809 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.067 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.067 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.067 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.326 14:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.259 14:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.259 14:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.825 00:15:48.825 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.825 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.826 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.084 { 00:15:49.084 "cntlid": 85, 00:15:49.084 "qid": 0, 00:15:49.084 "state": "enabled", 00:15:49.084 "thread": "nvmf_tgt_poll_group_000", 00:15:49.084 "listen_address": { 00:15:49.084 "trtype": "TCP", 00:15:49.084 "adrfam": "IPv4", 00:15:49.084 "traddr": "10.0.0.2", 00:15:49.084 "trsvcid": "4420" 00:15:49.084 }, 00:15:49.084 "peer_address": { 00:15:49.084 "trtype": "TCP", 00:15:49.084 "adrfam": "IPv4", 00:15:49.084 "traddr": "10.0.0.1", 00:15:49.084 "trsvcid": "59974" 00:15:49.084 }, 00:15:49.084 "auth": { 00:15:49.084 "state": "completed", 00:15:49.084 "digest": "sha384", 00:15:49.084 "dhgroup": "ffdhe6144" 00:15:49.084 } 00:15:49.084 } 00:15:49.084 ]' 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:49.084 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.342 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.342 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.342 14:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.600 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:15:50.534 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.534 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.534 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.534 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.534 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.534 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.534 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.534 14:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.534 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.792 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.792 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.792 14:17:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:51.050 00:15:51.307 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:51.307 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:51.308 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.565 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.565 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.565 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.565 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.565 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.565 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.565 { 00:15:51.565 "cntlid": 87, 00:15:51.565 "qid": 0, 00:15:51.565 "state": "enabled", 00:15:51.565 "thread": "nvmf_tgt_poll_group_000", 00:15:51.565 "listen_address": { 00:15:51.565 "trtype": "TCP", 00:15:51.565 "adrfam": "IPv4", 00:15:51.565 "traddr": "10.0.0.2", 00:15:51.565 "trsvcid": "4420" 00:15:51.565 }, 00:15:51.565 "peer_address": { 00:15:51.565 "trtype": "TCP", 00:15:51.565 "adrfam": "IPv4", 00:15:51.565 "traddr": "10.0.0.1", 00:15:51.565 "trsvcid": "42084" 00:15:51.565 }, 00:15:51.565 "auth": { 00:15:51.565 "state": "completed", 00:15:51.565 "digest": "sha384", 00:15:51.565 "dhgroup": "ffdhe6144" 00:15:51.565 } 00:15:51.565 } 00:15:51.565 ]' 00:15:51.565 14:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.565 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.565 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.565 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:51.565 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.565 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.565 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.565 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.822 14:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.756 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.029 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.030 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.030 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.030 14:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.006 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.006 { 00:15:54.006 "cntlid": 89, 00:15:54.006 "qid": 0, 00:15:54.006 "state": "enabled", 00:15:54.006 "thread": "nvmf_tgt_poll_group_000", 00:15:54.006 "listen_address": { 00:15:54.006 "trtype": "TCP", 00:15:54.006 "adrfam": "IPv4", 00:15:54.006 "traddr": "10.0.0.2", 00:15:54.006 "trsvcid": "4420" 00:15:54.006 }, 00:15:54.006 "peer_address": { 00:15:54.006 "trtype": "TCP", 00:15:54.006 "adrfam": "IPv4", 00:15:54.006 "traddr": "10.0.0.1", 00:15:54.006 "trsvcid": "42108" 00:15:54.006 }, 00:15:54.006 "auth": { 00:15:54.006 "state": "completed", 00:15:54.006 "digest": "sha384", 00:15:54.006 "dhgroup": "ffdhe8192" 00:15:54.006 } 00:15:54.006 } 00:15:54.006 ]' 00:15:54.006 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.263 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:54.263 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.263 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:54.263 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.263 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.263 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.263 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.521 14:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:15:55.454 14:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.454 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.454 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.454 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.454 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.454 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.454 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.454 14:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.712 14:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.645 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.645 { 00:15:56.645 "cntlid": 91, 00:15:56.645 "qid": 0, 00:15:56.645 "state": "enabled", 00:15:56.645 "thread": "nvmf_tgt_poll_group_000", 00:15:56.645 "listen_address": { 00:15:56.645 "trtype": "TCP", 00:15:56.645 "adrfam": "IPv4", 00:15:56.645 "traddr": "10.0.0.2", 00:15:56.645 "trsvcid": "4420" 00:15:56.645 }, 00:15:56.645 "peer_address": { 00:15:56.645 "trtype": "TCP", 00:15:56.645 "adrfam": "IPv4", 00:15:56.645 "traddr": "10.0.0.1", 00:15:56.645 "trsvcid": "42134" 00:15:56.645 }, 00:15:56.645 "auth": { 00:15:56.645 "state": "completed", 00:15:56.645 "digest": "sha384", 00:15:56.645 "dhgroup": "ffdhe8192" 00:15:56.645 } 00:15:56.645 } 00:15:56.645 ]' 00:15:56.645 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.903 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:56.903 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.903 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:56.903 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.903 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.903 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.903 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.161 14:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:15:58.092 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.092 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.092 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.092 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.092 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.092 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.092 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.350 14:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.915 00:15:59.173 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.173 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.173 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.173 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.431 { 00:15:59.431 "cntlid": 93, 00:15:59.431 "qid": 0, 00:15:59.431 "state": "enabled", 00:15:59.431 "thread": "nvmf_tgt_poll_group_000", 00:15:59.431 "listen_address": { 00:15:59.431 "trtype": "TCP", 00:15:59.431 "adrfam": "IPv4", 00:15:59.431 "traddr": "10.0.0.2", 00:15:59.431 "trsvcid": "4420" 00:15:59.431 }, 00:15:59.431 "peer_address": { 00:15:59.431 "trtype": "TCP", 00:15:59.431 "adrfam": "IPv4", 00:15:59.431 "traddr": "10.0.0.1", 00:15:59.431 "trsvcid": "42178" 00:15:59.431 }, 00:15:59.431 "auth": { 00:15:59.431 "state": "completed", 00:15:59.431 "digest": "sha384", 00:15:59.431 "dhgroup": "ffdhe8192" 00:15:59.431 } 00:15:59.431 } 00:15:59.431 ]' 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.431 14:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.689 14:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:16:00.622 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.622 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.622 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.622 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.622 14:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.622 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.622 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.622 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.880 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:00.880 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.881 14:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.813 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.813 { 00:16:01.813 "cntlid": 95, 00:16:01.813 "qid": 0, 00:16:01.813 "state": "enabled", 00:16:01.813 "thread": "nvmf_tgt_poll_group_000", 00:16:01.813 "listen_address": { 00:16:01.813 "trtype": "TCP", 00:16:01.813 "adrfam": "IPv4", 00:16:01.813 "traddr": "10.0.0.2", 00:16:01.813 "trsvcid": "4420" 00:16:01.813 }, 00:16:01.813 "peer_address": { 00:16:01.813 "trtype": "TCP", 00:16:01.813 "adrfam": "IPv4", 00:16:01.813 "traddr": "10.0.0.1", 00:16:01.813 "trsvcid": "44996" 00:16:01.813 }, 00:16:01.813 "auth": { 00:16:01.813 "state": "completed", 00:16:01.813 "digest": "sha384", 00:16:01.813 "dhgroup": "ffdhe8192" 00:16:01.813 } 00:16:01.813 } 00:16:01.813 ]' 00:16:01.813 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.071 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.071 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.071 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:02.071 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.071 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.071 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.071 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.328 14:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.262 14:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:03.262 14:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.520 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.778 00:16:03.778 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.778 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.778 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.036 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.036 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.036 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.036 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.036 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.036 14:17:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.036 { 00:16:04.036 "cntlid": 97, 00:16:04.036 "qid": 0, 00:16:04.036 "state": "enabled", 00:16:04.036 "thread": "nvmf_tgt_poll_group_000", 00:16:04.036 "listen_address": { 00:16:04.036 "trtype": "TCP", 00:16:04.036 "adrfam": "IPv4", 00:16:04.036 "traddr": "10.0.0.2", 00:16:04.036 "trsvcid": "4420" 00:16:04.036 }, 00:16:04.036 "peer_address": { 00:16:04.036 "trtype": "TCP", 00:16:04.036 "adrfam": "IPv4", 00:16:04.036 "traddr": "10.0.0.1", 00:16:04.036 "trsvcid": "45020" 00:16:04.036 }, 00:16:04.036 "auth": { 00:16:04.036 "state": "completed", 00:16:04.036 "digest": "sha512", 00:16:04.036 "dhgroup": "null" 00:16:04.036 } 00:16:04.036 } 00:16:04.036 ]' 00:16:04.036 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.294 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.294 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.294 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:04.294 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.294 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.294 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.294 14:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.551 14:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:16:05.484 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.484 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.484 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.484 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.484 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.484 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.484 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:05.484 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.744 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.310 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.310 { 00:16:06.310 "cntlid": 99, 00:16:06.310 "qid": 0, 00:16:06.310 "state": "enabled", 00:16:06.310 "thread": "nvmf_tgt_poll_group_000", 00:16:06.310 "listen_address": { 00:16:06.310 "trtype": "TCP", 00:16:06.310 "adrfam": "IPv4", 00:16:06.310 
"traddr": "10.0.0.2", 00:16:06.310 "trsvcid": "4420" 00:16:06.310 }, 00:16:06.310 "peer_address": { 00:16:06.310 "trtype": "TCP", 00:16:06.310 "adrfam": "IPv4", 00:16:06.310 "traddr": "10.0.0.1", 00:16:06.310 "trsvcid": "45050" 00:16:06.310 }, 00:16:06.310 "auth": { 00:16:06.310 "state": "completed", 00:16:06.310 "digest": "sha512", 00:16:06.310 "dhgroup": "null" 00:16:06.310 } 00:16:06.310 } 00:16:06.310 ]' 00:16:06.310 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.568 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.568 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.568 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:06.568 14:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.568 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.568 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.568 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.825 14:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:16:07.758 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.758 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.758 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.758 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.758 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.758 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.758 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.758 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.016 14:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.016 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.274 00:16:08.274 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.274 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.274 14:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.532 { 00:16:08.532 "cntlid": 101, 00:16:08.532 "qid": 0, 00:16:08.532 "state": "enabled", 00:16:08.532 "thread": "nvmf_tgt_poll_group_000", 00:16:08.532 "listen_address": { 00:16:08.532 "trtype": "TCP", 00:16:08.532 "adrfam": "IPv4", 00:16:08.532 "traddr": "10.0.0.2", 00:16:08.532 "trsvcid": "4420" 00:16:08.532 }, 00:16:08.532 "peer_address": { 00:16:08.532 "trtype": "TCP", 00:16:08.532 "adrfam": "IPv4", 00:16:08.532 "traddr": "10.0.0.1", 00:16:08.532 "trsvcid": "45068" 00:16:08.532 }, 00:16:08.532 "auth": { 00:16:08.532 "state": "completed", 00:16:08.532 "digest": "sha512", 00:16:08.532 "dhgroup": "null" 
00:16:08.532 } 00:16:08.532 } 00:16:08.532 ]' 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.532 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.790 14:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:16:09.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.749 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.749 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.007 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:10.572 00:16:10.572 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.572 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.572 14:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.572 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.572 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.572 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.572 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.828 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.829 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.829 { 00:16:10.829 "cntlid": 103, 00:16:10.829 "qid": 0, 00:16:10.829 "state": "enabled", 00:16:10.829 "thread": "nvmf_tgt_poll_group_000", 00:16:10.829 "listen_address": { 00:16:10.829 "trtype": "TCP", 00:16:10.829 "adrfam": "IPv4", 00:16:10.829 "traddr": "10.0.0.2", 00:16:10.829 "trsvcid": "4420" 00:16:10.829 }, 00:16:10.829 "peer_address": { 00:16:10.829 "trtype": "TCP", 00:16:10.829 "adrfam": "IPv4", 00:16:10.829 "traddr": "10.0.0.1", 00:16:10.829 "trsvcid": "34324" 00:16:10.829 }, 00:16:10.829 "auth": { 00:16:10.829 "state": "completed", 00:16:10.829 "digest": "sha512", 00:16:10.829 "dhgroup": "null" 00:16:10.829 } 00:16:10.829 } 00:16:10.829 ]' 00:16:10.829 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.829 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.829 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.829 14:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:10.829 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.829 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.829 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.829 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.085 14:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.017 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.275 14:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.275 14:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.532 00:16:12.532 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.532 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.532 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.789 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.789 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.789 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.789 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.789 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.789 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.789 { 00:16:12.789 "cntlid": 105, 00:16:12.789 "qid": 0, 00:16:12.789 "state": "enabled", 00:16:12.789 "thread": "nvmf_tgt_poll_group_000", 00:16:12.789 "listen_address": { 00:16:12.789 "trtype": "TCP", 00:16:12.789 "adrfam": "IPv4", 00:16:12.789 "traddr": "10.0.0.2", 00:16:12.789 "trsvcid": "4420" 00:16:12.789 }, 00:16:12.789 "peer_address": { 00:16:12.789 "trtype": "TCP", 00:16:12.789 "adrfam": "IPv4", 00:16:12.790 "traddr": "10.0.0.1", 00:16:12.790 "trsvcid": "34360" 00:16:12.790 }, 00:16:12.790 "auth": { 00:16:12.790 "state": "completed", 00:16:12.790 "digest": "sha512", 00:16:12.790 "dhgroup": "ffdhe2048" 00:16:12.790 } 00:16:12.790 } 00:16:12.790 ]' 00:16:12.790 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.790 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.790 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.790 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.790 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.047 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.047 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.047 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.304 14:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:16:14.234 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.234 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.234 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.234 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.234 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.234 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.234 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.234 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.490 14:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.746 00:16:14.746 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.746 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.746 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.003 { 00:16:15.003 "cntlid": 107, 00:16:15.003 "qid": 0, 00:16:15.003 "state": "enabled", 00:16:15.003 "thread": "nvmf_tgt_poll_group_000", 00:16:15.003 "listen_address": { 00:16:15.003 "trtype": "TCP", 00:16:15.003 "adrfam": "IPv4", 00:16:15.003 "traddr": "10.0.0.2", 00:16:15.003 "trsvcid": "4420" 00:16:15.003 }, 00:16:15.003 "peer_address": { 00:16:15.003 "trtype": "TCP", 00:16:15.003 "adrfam": "IPv4", 00:16:15.003 "traddr": "10.0.0.1", 00:16:15.003 "trsvcid": "34390" 00:16:15.003 }, 00:16:15.003 "auth": { 00:16:15.003 "state": "completed", 00:16:15.003 "digest": "sha512", 00:16:15.003 "dhgroup": "ffdhe2048" 00:16:15.003 } 00:16:15.003 } 00:16:15.003 ]' 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.003 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.259 14:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:16:16.190 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.190 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.190 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.190 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.190 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.190 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.190 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.190 14:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:16.448 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:17.013 00:16:17.013 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.013 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.013 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.270 { 00:16:17.270 "cntlid": 109, 00:16:17.270 "qid": 0, 00:16:17.270 "state": "enabled", 00:16:17.270 "thread": "nvmf_tgt_poll_group_000", 00:16:17.270 "listen_address": { 00:16:17.270 "trtype": "TCP", 00:16:17.270 "adrfam": "IPv4", 00:16:17.270 "traddr": "10.0.0.2", 00:16:17.270 "trsvcid": "4420" 00:16:17.270 }, 00:16:17.270 "peer_address": { 00:16:17.270 "trtype": "TCP", 00:16:17.270 "adrfam": "IPv4", 00:16:17.270 "traddr": "10.0.0.1", 00:16:17.270 "trsvcid": "34424" 00:16:17.270 }, 00:16:17.270 "auth": { 00:16:17.270 "state": "completed", 00:16:17.270 "digest": "sha512", 00:16:17.270 "dhgroup": "ffdhe2048" 00:16:17.270 } 00:16:17.270 } 00:16:17.270 ]' 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.270 14:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.528 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:16:18.459 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.460 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.460 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.460 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.460 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.460 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.460 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.460 14:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.717 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:18.975 00:16:18.975 14:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.975 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.975 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.232 { 00:16:19.232 "cntlid": 111, 00:16:19.232 "qid": 0, 00:16:19.232 "state": "enabled", 00:16:19.232 "thread": "nvmf_tgt_poll_group_000", 00:16:19.232 "listen_address": { 00:16:19.232 "trtype": "TCP", 00:16:19.232 "adrfam": "IPv4", 00:16:19.232 "traddr": "10.0.0.2", 00:16:19.232 "trsvcid": "4420" 00:16:19.232 }, 00:16:19.232 "peer_address": { 00:16:19.232 "trtype": "TCP", 00:16:19.232 "adrfam": "IPv4", 00:16:19.232 "traddr": "10.0.0.1", 00:16:19.232 "trsvcid": "34450" 00:16:19.232 }, 00:16:19.232 "auth": { 00:16:19.232 "state": "completed", 00:16:19.232 "digest": "sha512", 00:16:19.232 "dhgroup": "ffdhe2048" 00:16:19.232 } 00:16:19.232 } 00:16:19.232 ]' 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.232 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.490 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.490 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.490 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.490 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.490 14:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.748 14:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:16:20.682 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.682 14:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.682 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.682 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.682 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.682 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.682 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.682 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:20.682 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.940 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.198 00:16:21.198 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.198 14:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.198 14:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.456 { 00:16:21.456 "cntlid": 113, 00:16:21.456 "qid": 0, 00:16:21.456 "state": "enabled", 00:16:21.456 "thread": "nvmf_tgt_poll_group_000", 00:16:21.456 "listen_address": { 00:16:21.456 "trtype": "TCP", 00:16:21.456 "adrfam": "IPv4", 00:16:21.456 "traddr": "10.0.0.2", 00:16:21.456 "trsvcid": "4420" 00:16:21.456 }, 00:16:21.456 "peer_address": { 00:16:21.456 "trtype": "TCP", 00:16:21.456 "adrfam": "IPv4", 00:16:21.456 "traddr": "10.0.0.1", 00:16:21.456 "trsvcid": "40732" 00:16:21.456 }, 00:16:21.456 "auth": { 00:16:21.456 "state": "completed", 00:16:21.456 "digest": "sha512", 00:16:21.456 "dhgroup": "ffdhe3072" 00:16:21.456 } 00:16:21.456 } 00:16:21.456 ]' 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.456 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.714 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.714 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.714 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.714 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.714 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.971 14:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:16:22.904 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.904 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.904 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.904 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.904 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.904 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.904 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.904 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.904 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.161 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.419 00:16:23.419 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.419 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.419 14:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.676 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:16:23.676 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.677 { 00:16:23.677 "cntlid": 115, 00:16:23.677 "qid": 0, 00:16:23.677 "state": "enabled", 00:16:23.677 "thread": "nvmf_tgt_poll_group_000", 00:16:23.677 "listen_address": { 00:16:23.677 "trtype": "TCP", 00:16:23.677 "adrfam": "IPv4", 00:16:23.677 "traddr": "10.0.0.2", 00:16:23.677 "trsvcid": "4420" 00:16:23.677 }, 00:16:23.677 "peer_address": { 00:16:23.677 "trtype": "TCP", 00:16:23.677 "adrfam": "IPv4", 00:16:23.677 "traddr": "10.0.0.1", 00:16:23.677 "trsvcid": "40764" 00:16:23.677 }, 00:16:23.677 "auth": { 00:16:23.677 "state": "completed", 00:16:23.677 "digest": "sha512", 00:16:23.677 "dhgroup": "ffdhe3072" 00:16:23.677 } 00:16:23.677 } 00:16:23.677 ]' 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.677 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.934 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.934 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.934 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.191 14:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:16:25.122 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.122 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.122 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.122 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.122 14:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.123 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.123 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.123 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.381 14:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.646 00:16:25.646 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.646 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.646 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.947 14:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.947 { 00:16:25.947 "cntlid": 117, 00:16:25.947 "qid": 0, 00:16:25.947 "state": "enabled", 00:16:25.947 "thread": "nvmf_tgt_poll_group_000", 00:16:25.947 "listen_address": { 00:16:25.947 "trtype": "TCP", 00:16:25.947 "adrfam": "IPv4", 00:16:25.947 "traddr": "10.0.0.2", 00:16:25.947 "trsvcid": "4420" 00:16:25.947 }, 00:16:25.947 "peer_address": { 00:16:25.947 "trtype": "TCP", 00:16:25.947 "adrfam": "IPv4", 00:16:25.947 "traddr": "10.0.0.1", 00:16:25.947 "trsvcid": "40800" 00:16:25.947 }, 00:16:25.947 "auth": { 00:16:25.947 "state": "completed", 00:16:25.947 "digest": "sha512", 00:16:25.947 "dhgroup": "ffdhe3072" 00:16:25.947 } 00:16:25.947 } 00:16:25.947 ]' 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.947 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.523 14:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:16:27.456 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.456 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.456 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.456 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.456 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.456 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.456 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:16:27.456 14:17:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:27.456 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.022 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.022 { 00:16:28.022 "cntlid": 119, 00:16:28.022 "qid": 0, 00:16:28.022 "state": "enabled", 00:16:28.022 "thread": 
"nvmf_tgt_poll_group_000", 00:16:28.022 "listen_address": { 00:16:28.022 "trtype": "TCP", 00:16:28.022 "adrfam": "IPv4", 00:16:28.022 "traddr": "10.0.0.2", 00:16:28.022 "trsvcid": "4420" 00:16:28.022 }, 00:16:28.022 "peer_address": { 00:16:28.022 "trtype": "TCP", 00:16:28.022 "adrfam": "IPv4", 00:16:28.022 "traddr": "10.0.0.1", 00:16:28.022 "trsvcid": "40836" 00:16:28.022 }, 00:16:28.022 "auth": { 00:16:28.022 "state": "completed", 00:16:28.022 "digest": "sha512", 00:16:28.022 "dhgroup": "ffdhe3072" 00:16:28.022 } 00:16:28.022 } 00:16:28.022 ]' 00:16:28.022 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.280 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.280 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.280 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.280 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.280 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.280 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.280 14:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.537 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.473 14:17:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.473 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.039 00:16:30.039 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.039 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.039 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.297 { 00:16:30.297 "cntlid": 121, 00:16:30.297 "qid": 0, 00:16:30.297 "state": "enabled", 00:16:30.297 "thread": "nvmf_tgt_poll_group_000", 00:16:30.297 "listen_address": { 00:16:30.297 "trtype": "TCP", 00:16:30.297 "adrfam": "IPv4", 00:16:30.297 "traddr": "10.0.0.2", 00:16:30.297 "trsvcid": "4420" 00:16:30.297 }, 00:16:30.297 "peer_address": { 00:16:30.297 "trtype": "TCP", 00:16:30.297 "adrfam": 
"IPv4", 00:16:30.297 "traddr": "10.0.0.1", 00:16:30.297 "trsvcid": "38726" 00:16:30.297 }, 00:16:30.297 "auth": { 00:16:30.297 "state": "completed", 00:16:30.297 "digest": "sha512", 00:16:30.297 "dhgroup": "ffdhe4096" 00:16:30.297 } 00:16:30.297 } 00:16:30.297 ]' 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.297 14:17:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.555 14:18:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:16:31.490 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.490 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.490 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.490 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.490 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.490 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.490 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.490 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.748 
14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.748 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.749 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.314 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.314 14:18:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.314 { 00:16:32.314 "cntlid": 123, 00:16:32.314 "qid": 0, 00:16:32.314 "state": "enabled", 00:16:32.314 "thread": "nvmf_tgt_poll_group_000", 00:16:32.314 "listen_address": { 00:16:32.314 "trtype": "TCP", 00:16:32.314 "adrfam": "IPv4", 00:16:32.314 "traddr": "10.0.0.2", 00:16:32.314 "trsvcid": "4420" 00:16:32.314 }, 00:16:32.314 "peer_address": { 00:16:32.314 "trtype": "TCP", 00:16:32.314 "adrfam": "IPv4", 00:16:32.314 "traddr": "10.0.0.1", 00:16:32.314 "trsvcid": "38746" 00:16:32.314 }, 00:16:32.314 "auth": { 00:16:32.314 "state": "completed", 00:16:32.314 "digest": "sha512", 00:16:32.314 "dhgroup": "ffdhe4096" 00:16:32.314 } 00:16:32.314 } 00:16:32.314 ]' 00:16:32.314 14:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.572 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.831 14:18:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:16:33.769 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.769 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.769 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.769 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.769 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.769 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.769 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.769 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.026 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.284 00:16:34.284 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.284 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.284 14:18:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.542 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.542 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.542 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.542 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.542 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.542 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.542 { 00:16:34.542 "cntlid": 125, 00:16:34.542 "qid": 0, 00:16:34.542 "state": "enabled", 00:16:34.542 "thread": "nvmf_tgt_poll_group_000", 00:16:34.542 "listen_address": { 00:16:34.542 "trtype": "TCP", 00:16:34.542 "adrfam": "IPv4", 00:16:34.542 "traddr": "10.0.0.2", 00:16:34.542 "trsvcid": "4420" 00:16:34.542 }, 00:16:34.542 "peer_address": { 00:16:34.542 "trtype": "TCP", 00:16:34.542 "adrfam": "IPv4", 00:16:34.542 "traddr": "10.0.0.1", 00:16:34.542 "trsvcid": "38770" 00:16:34.542 }, 00:16:34.542 "auth": { 00:16:34.542 "state": "completed", 00:16:34.542 "digest": "sha512", 00:16:34.542 "dhgroup": "ffdhe4096" 00:16:34.542 } 00:16:34.542 } 00:16:34.542 ]' 00:16:34.542 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.801 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.801 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.801 
14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:34.801 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.801 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.801 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.801 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.058 14:18:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:16:35.994 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.994 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.994 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.994 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.994 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.994 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.994 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:35.994 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.253 14:18:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:36.511 00:16:36.511 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.511 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.511 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.769 { 00:16:36.769 "cntlid": 127, 00:16:36.769 "qid": 0, 00:16:36.769 "state": "enabled", 00:16:36.769 "thread": "nvmf_tgt_poll_group_000", 00:16:36.769 "listen_address": { 00:16:36.769 "trtype": "TCP", 00:16:36.769 "adrfam": "IPv4", 00:16:36.769 "traddr": "10.0.0.2", 00:16:36.769 "trsvcid": "4420" 00:16:36.769 }, 00:16:36.769 "peer_address": { 00:16:36.769 "trtype": "TCP", 00:16:36.769 "adrfam": "IPv4", 00:16:36.769 "traddr": "10.0.0.1", 00:16:36.769 "trsvcid": "38788" 00:16:36.769 }, 00:16:36.769 "auth": { 00:16:36.769 "state": "completed", 00:16:36.769 "digest": "sha512", 00:16:36.769 "dhgroup": "ffdhe4096" 00:16:36.769 } 00:16:36.769 } 00:16:36.769 ]' 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.769 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:37.027 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.027 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:37.027 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.027 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.027 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.284 14:18:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.220 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.478 14:18:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.047 00:16:39.047 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.047 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.047 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.305 { 00:16:39.305 "cntlid": 129, 00:16:39.305 "qid": 0, 00:16:39.305 "state": "enabled", 00:16:39.305 "thread": "nvmf_tgt_poll_group_000", 00:16:39.305 "listen_address": { 00:16:39.305 "trtype": "TCP", 00:16:39.305 "adrfam": "IPv4", 00:16:39.305 "traddr": "10.0.0.2", 00:16:39.305 "trsvcid": "4420" 00:16:39.305 }, 00:16:39.305 "peer_address": { 00:16:39.305 "trtype": "TCP", 00:16:39.305 "adrfam": "IPv4", 00:16:39.305 "traddr": "10.0.0.1", 00:16:39.305 "trsvcid": "38798" 00:16:39.305 }, 00:16:39.305 "auth": { 00:16:39.305 "state": "completed", 00:16:39.305 "digest": "sha512", 00:16:39.305 "dhgroup": "ffdhe6144" 00:16:39.305 } 00:16:39.305 } 00:16:39.305 ]' 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.305 14:18:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.563 
14:18:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:16:40.499 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.499 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.499 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:40.499 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.499 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.499 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.499 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.499 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.499 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.757 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.323 00:16:41.323 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.323 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.323 14:18:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.581 { 00:16:41.581 "cntlid": 131, 00:16:41.581 "qid": 0, 00:16:41.581 "state": "enabled", 00:16:41.581 "thread": "nvmf_tgt_poll_group_000", 00:16:41.581 "listen_address": { 00:16:41.581 "trtype": "TCP", 00:16:41.581 "adrfam": "IPv4", 00:16:41.581 "traddr": "10.0.0.2", 00:16:41.581 "trsvcid": "4420" 00:16:41.581 }, 00:16:41.581 "peer_address": { 00:16:41.581 "trtype": "TCP", 00:16:41.581 "adrfam": "IPv4", 00:16:41.581 "traddr": "10.0.0.1", 00:16:41.581 "trsvcid": "47306" 00:16:41.581 }, 00:16:41.581 "auth": { 00:16:41.581 "state": "completed", 00:16:41.581 "digest": "sha512", 00:16:41.581 "dhgroup": "ffdhe6144" 00:16:41.581 } 00:16:41.581 } 00:16:41.581 ]' 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.581 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.841 14:18:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:16:42.803 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.803 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:42.803 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.803 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.803 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.803 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:42.803 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:42.803 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.061 14:18:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.629 
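Each iteration recorded above and below follows the same connect/authenticate cycle, varying only the digest, dhgroup and key index. A condensed bash sketch of that cycle, assuming the same rpc.py path, host RPC socket and NQNs that appear in this log (the DHHC-1 secrets are replaced with placeholders and the target-side RPC socket handling is omitted):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  host_sock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side: restrict DH-HMAC-CHAP negotiation to the combination under test.
  $rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Target side: register the host with its key pair (issued via rpc_cmd in the log).
  $rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach a controller, which drives the authentication handshake.
  $rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $hostnqn -n $subnqn --dhchap-key key2 --dhchap-ctrlr-key ckey2

After the attach the script inspects the resulting qpair, detaches, repeats the handshake through the kernel initiator with nvme connect --dhchap-secret/--dhchap-ctrl-secret, disconnects, and removes the host from the subsystem before moving on to the next key.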
00:16:43.629 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.629 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.629 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.886 { 00:16:43.886 "cntlid": 133, 00:16:43.886 "qid": 0, 00:16:43.886 "state": "enabled", 00:16:43.886 "thread": "nvmf_tgt_poll_group_000", 00:16:43.886 "listen_address": { 00:16:43.886 "trtype": "TCP", 00:16:43.886 "adrfam": "IPv4", 00:16:43.886 "traddr": "10.0.0.2", 00:16:43.886 "trsvcid": "4420" 00:16:43.886 }, 00:16:43.886 "peer_address": { 00:16:43.886 "trtype": "TCP", 00:16:43.886 "adrfam": "IPv4", 00:16:43.886 "traddr": "10.0.0.1", 00:16:43.886 "trsvcid": "47340" 00:16:43.886 }, 00:16:43.886 "auth": { 00:16:43.886 "state": "completed", 00:16:43.886 "digest": "sha512", 00:16:43.886 "dhgroup": "ffdhe6144" 00:16:43.886 } 00:16:43.886 } 00:16:43.886 ]' 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.886 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.143 14:18:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:16:45.081 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.081 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:45.081 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.081 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.081 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.081 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.081 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:45.081 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.081 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.338 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.339 14:18:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:45.905 00:16:45.905 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.905 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.905 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.163 { 00:16:46.163 "cntlid": 135, 00:16:46.163 "qid": 0, 00:16:46.163 "state": "enabled", 00:16:46.163 "thread": "nvmf_tgt_poll_group_000", 00:16:46.163 "listen_address": { 00:16:46.163 "trtype": "TCP", 00:16:46.163 "adrfam": "IPv4", 00:16:46.163 "traddr": "10.0.0.2", 00:16:46.163 "trsvcid": "4420" 00:16:46.163 }, 00:16:46.163 "peer_address": { 00:16:46.163 "trtype": "TCP", 00:16:46.163 "adrfam": "IPv4", 00:16:46.163 "traddr": "10.0.0.1", 00:16:46.163 "trsvcid": "47354" 00:16:46.163 }, 00:16:46.163 "auth": { 00:16:46.163 "state": "completed", 00:16:46.163 "digest": "sha512", 00:16:46.163 "dhgroup": "ffdhe6144" 00:16:46.163 } 00:16:46.163 } 00:16:46.163 ]' 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.163 14:18:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.422 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:47.358 14:18:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.616 14:18:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.553 00:16:48.553 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.553 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.553 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
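The get_qpairs step that follows asserts on the auth block of the first qpair; a minimal standalone restatement of those assertions, assuming the same JSON shape as the nvmf_subsystem_get_qpairs output printed below (rpc.py socket options omitted):

  qpairs=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  # Values expected for this iteration, as shown in the qpairs JSON in the log.
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]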
00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.811 { 00:16:48.811 "cntlid": 137, 00:16:48.811 "qid": 0, 00:16:48.811 "state": "enabled", 00:16:48.811 "thread": "nvmf_tgt_poll_group_000", 00:16:48.811 "listen_address": { 00:16:48.811 "trtype": "TCP", 00:16:48.811 "adrfam": "IPv4", 00:16:48.811 "traddr": "10.0.0.2", 00:16:48.811 "trsvcid": "4420" 00:16:48.811 }, 00:16:48.811 "peer_address": { 00:16:48.811 "trtype": "TCP", 00:16:48.811 "adrfam": "IPv4", 00:16:48.811 "traddr": "10.0.0.1", 00:16:48.811 "trsvcid": "47394" 00:16:48.811 }, 00:16:48.811 "auth": { 00:16:48.811 "state": "completed", 00:16:48.811 "digest": "sha512", 00:16:48.811 "dhgroup": "ffdhe8192" 00:16:48.811 } 00:16:48.811 } 00:16:48.811 ]' 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.811 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.069 14:18:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:16:50.004 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.004 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.004 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.004 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.004 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.004 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.004 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.004 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.262 14:18:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.198 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.198 { 00:16:51.198 "cntlid": 139, 00:16:51.198 "qid": 0, 00:16:51.198 "state": "enabled", 00:16:51.198 "thread": "nvmf_tgt_poll_group_000", 00:16:51.198 "listen_address": { 00:16:51.198 "trtype": "TCP", 00:16:51.198 "adrfam": "IPv4", 00:16:51.198 "traddr": "10.0.0.2", 00:16:51.198 "trsvcid": "4420" 00:16:51.198 }, 00:16:51.198 "peer_address": { 00:16:51.198 "trtype": "TCP", 00:16:51.198 "adrfam": "IPv4", 00:16:51.198 "traddr": "10.0.0.1", 00:16:51.198 "trsvcid": "34484" 00:16:51.198 }, 00:16:51.198 "auth": { 00:16:51.198 "state": "completed", 00:16:51.198 "digest": "sha512", 00:16:51.198 "dhgroup": "ffdhe8192" 00:16:51.198 } 00:16:51.198 } 00:16:51.198 ]' 00:16:51.198 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.456 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:51.456 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.456 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:51.456 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.457 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.457 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.457 14:18:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.714 14:18:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:MWIxOGI1Y2Y1OTY0NmM3YjE5NzM1OTZhMDY0MDk0OTJRCBZy: --dhchap-ctrl-secret DHHC-1:02:YWIzY2VlOGIyNmQ2NTUxZTRmN2JiMGE4NjEwZmViOWMwM2M0NTExMTI5MjFjY2Q4Cc5pjw==: 00:16:52.650 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.650 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.650 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.650 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.650 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.650 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.650 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.650 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.908 14:18:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.844 00:16:53.844 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.844 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.844 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.101 { 00:16:54.101 "cntlid": 141, 00:16:54.101 "qid": 0, 00:16:54.101 "state": "enabled", 00:16:54.101 "thread": "nvmf_tgt_poll_group_000", 00:16:54.101 "listen_address": 
{ 00:16:54.101 "trtype": "TCP", 00:16:54.101 "adrfam": "IPv4", 00:16:54.101 "traddr": "10.0.0.2", 00:16:54.101 "trsvcid": "4420" 00:16:54.101 }, 00:16:54.101 "peer_address": { 00:16:54.101 "trtype": "TCP", 00:16:54.101 "adrfam": "IPv4", 00:16:54.101 "traddr": "10.0.0.1", 00:16:54.101 "trsvcid": "34510" 00:16:54.101 }, 00:16:54.101 "auth": { 00:16:54.101 "state": "completed", 00:16:54.101 "digest": "sha512", 00:16:54.101 "dhgroup": "ffdhe8192" 00:16:54.101 } 00:16:54.101 } 00:16:54.101 ]' 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.101 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.360 14:18:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:NmJhYzg4YTNjOTY4ZjczZjFjNjk1NDFjZjNmZDRhMThhNzMwZDQ0ZGIyNDg4Yjk0eJ/n8g==: --dhchap-ctrl-secret DHHC-1:01:MDFhYzM3NGRiZTkwMzk3YjdjYjRhY2UwZjc0MDBkMDVGxHBZ: 00:16:55.295 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.295 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:55.295 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.295 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.295 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.295 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.295 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.295 14:18:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.553 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.490 00:16:56.490 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.490 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.490 14:18:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.748 { 00:16:56.748 "cntlid": 143, 00:16:56.748 "qid": 0, 00:16:56.748 "state": "enabled", 00:16:56.748 "thread": "nvmf_tgt_poll_group_000", 00:16:56.748 "listen_address": { 00:16:56.748 "trtype": "TCP", 00:16:56.748 "adrfam": "IPv4", 00:16:56.748 "traddr": "10.0.0.2", 00:16:56.748 "trsvcid": "4420" 00:16:56.748 }, 00:16:56.748 "peer_address": { 00:16:56.748 "trtype": "TCP", 00:16:56.748 "adrfam": "IPv4", 00:16:56.748 "traddr": "10.0.0.1", 00:16:56.748 "trsvcid": "34518" 00:16:56.748 }, 00:16:56.748 "auth": { 00:16:56.748 "state": "completed", 00:16:56.748 "digest": "sha512", 00:16:56.748 "dhgroup": 
"ffdhe8192" 00:16:56.748 } 00:16:56.748 } 00:16:56.748 ]' 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.748 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.005 14:18:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:57.940 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.237 14:18:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.193 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.193 { 00:16:59.193 "cntlid": 145, 00:16:59.193 "qid": 0, 00:16:59.193 "state": "enabled", 00:16:59.193 "thread": "nvmf_tgt_poll_group_000", 00:16:59.193 "listen_address": { 00:16:59.193 "trtype": "TCP", 00:16:59.193 "adrfam": "IPv4", 00:16:59.193 "traddr": "10.0.0.2", 00:16:59.193 "trsvcid": "4420" 00:16:59.193 }, 00:16:59.193 "peer_address": { 00:16:59.193 "trtype": "TCP", 00:16:59.193 "adrfam": "IPv4", 00:16:59.193 "traddr": "10.0.0.1", 00:16:59.193 "trsvcid": "34552" 00:16:59.193 }, 00:16:59.193 "auth": { 00:16:59.193 
"state": "completed", 00:16:59.193 "digest": "sha512", 00:16:59.193 "dhgroup": "ffdhe8192" 00:16:59.193 } 00:16:59.193 } 00:16:59.193 ]' 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:59.193 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.469 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.470 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.470 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.470 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.470 14:18:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.728 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MTU0Y2MxYzNhMmVkZDhkMzI0MGUyNGMxNzQ5ZmM2ZTYzMDg1NWJlOTk2YTg0YmQ2OmTjDA==: --dhchap-ctrl-secret DHHC-1:03:ODQwMjMyNzNmODIwYjhlYjlmNTZmN2I4YzZkY2VkMTMxMjRhMmEyNDFiMGM5NTg0YjVhY2VjMzhmY2JiYzYyY84BxAY=: 00:17:00.666 14:18:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:00.666 14:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:00.666 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:01.232 request: 00:17:01.232 { 00:17:01.232 "name": "nvme0", 00:17:01.232 "trtype": "tcp", 00:17:01.232 "traddr": "10.0.0.2", 00:17:01.232 "adrfam": "ipv4", 00:17:01.232 "trsvcid": "4420", 00:17:01.232 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:01.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:01.232 "prchk_reftag": false, 00:17:01.232 "prchk_guard": false, 00:17:01.232 "hdgst": false, 00:17:01.232 "ddgst": false, 00:17:01.232 "dhchap_key": "key2", 00:17:01.232 "method": "bdev_nvme_attach_controller", 00:17:01.232 "req_id": 1 00:17:01.232 } 00:17:01.232 Got JSON-RPC error response 00:17:01.232 response: 00:17:01.232 { 00:17:01.232 "code": -5, 00:17:01.232 "message": "Input/output error" 00:17:01.232 } 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.232 
14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.232 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:01.233 14:18:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.170 request: 00:17:02.170 { 00:17:02.170 "name": "nvme0", 00:17:02.170 "trtype": "tcp", 00:17:02.170 "traddr": "10.0.0.2", 00:17:02.170 "adrfam": "ipv4", 00:17:02.170 "trsvcid": "4420", 00:17:02.170 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.170 "prchk_reftag": false, 00:17:02.170 "prchk_guard": false, 00:17:02.170 "hdgst": false, 00:17:02.170 "ddgst": false, 00:17:02.170 "dhchap_key": "key1", 00:17:02.170 "dhchap_ctrlr_key": "ckey2", 00:17:02.170 "method": "bdev_nvme_attach_controller", 00:17:02.170 "req_id": 1 00:17:02.170 } 00:17:02.170 Got JSON-RPC error response 00:17:02.170 response: 00:17:02.170 { 00:17:02.170 "code": -5, 00:17:02.170 "message": "Input/output error" 00:17:02.170 } 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.171 14:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.171 14:18:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.740 request: 00:17:02.740 { 00:17:02.740 "name": "nvme0", 00:17:02.740 "trtype": "tcp", 00:17:02.740 "traddr": "10.0.0.2", 00:17:02.740 "adrfam": "ipv4", 00:17:02.740 "trsvcid": "4420", 00:17:02.740 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.740 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.740 "prchk_reftag": false, 00:17:02.740 "prchk_guard": false, 00:17:02.740 "hdgst": false, 00:17:02.740 "ddgst": false, 00:17:02.740 "dhchap_key": "key1", 00:17:02.740 "dhchap_ctrlr_key": "ckey1", 00:17:02.740 "method": "bdev_nvme_attach_controller", 00:17:02.740 "req_id": 1 00:17:02.740 } 00:17:02.740 Got JSON-RPC error response 00:17:02.740 response: 00:17:02.740 { 00:17:02.740 "code": -5, 00:17:02.740 "message": "Input/output error" 00:17:02.740 } 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 907209 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 907209 ']' 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 907209 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:02.740 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 907209 00:17:02.999 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:02.999 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:02.999 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 907209' 00:17:02.999 killing process with pid 907209 00:17:02.999 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 907209 00:17:02.999 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 907209 00:17:03.257 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=928982 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 928982 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 928982 ']' 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.258 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 928982 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 928982 ']' 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
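The negative cases traced above all follow the same pattern: the target registers the host NQN with one DH-HMAC-CHAP key, the host then tries bdev_nvme_attach_controller with a controller key the target does not hold, the NOT wrapper expects the -5 (Input/output error) response, and the host entry is removed again before the next case. A minimal, hedged replay of that pattern using the rpc.py invocations visible in this trace (key names such as key1/ckey2 refer to keys registered earlier in the test; the NOT helper is simplified to a plain if):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Target side (default /var/tmp/spdk.sock): allow the host with key1 only.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

    # Host side: attaching with a controller key the target never registered must fail.
    if $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "mutual auth unexpectedly succeeded" >&2; exit 1
    fi

    # Drop the host entry so the next case can re-add it with different keys.
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"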
00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:03.516 14:18:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:03.774 14:18:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:04.709 00:17:04.709 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.709 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.709 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.967 { 00:17:04.967 "cntlid": 1, 00:17:04.967 "qid": 0, 00:17:04.967 "state": "enabled", 00:17:04.967 "thread": "nvmf_tgt_poll_group_000", 00:17:04.967 "listen_address": { 00:17:04.967 "trtype": "TCP", 00:17:04.967 "adrfam": "IPv4", 00:17:04.967 "traddr": "10.0.0.2", 00:17:04.967 "trsvcid": "4420" 00:17:04.967 }, 00:17:04.967 "peer_address": { 00:17:04.967 "trtype": "TCP", 00:17:04.967 "adrfam": "IPv4", 00:17:04.967 "traddr": "10.0.0.1", 00:17:04.967 "trsvcid": "38838" 00:17:04.967 }, 00:17:04.967 "auth": { 00:17:04.967 "state": "completed", 00:17:04.967 "digest": "sha512", 00:17:04.967 "dhgroup": "ffdhe8192" 00:17:04.967 } 00:17:04.967 } 00:17:04.967 ]' 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.967 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.227 14:18:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:Y2Y3OTNiMDM4OTQ1MTUzYWVkNGMxOWU2NTQ0ZmQ5NDI2NTU0N2YzNWU4MTc2MzdlZDc5OGVmNDkzYTNmNWUyYT16WCQ=: 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.163 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:06.163 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.422 14:18:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.680 request: 00:17:06.680 { 00:17:06.680 "name": "nvme0", 00:17:06.680 "trtype": "tcp", 00:17:06.680 "traddr": "10.0.0.2", 00:17:06.680 "adrfam": "ipv4", 00:17:06.680 "trsvcid": "4420", 00:17:06.680 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:06.680 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:06.680 "prchk_reftag": false, 00:17:06.680 "prchk_guard": false, 00:17:06.680 "hdgst": false, 00:17:06.680 "ddgst": false, 00:17:06.680 "dhchap_key": "key3", 00:17:06.680 "method": "bdev_nvme_attach_controller", 00:17:06.680 "req_id": 1 00:17:06.680 } 00:17:06.680 Got JSON-RPC error response 00:17:06.680 response: 00:17:06.680 { 00:17:06.680 "code": -5, 00:17:06.680 "message": "Input/output error" 00:17:06.680 } 00:17:06.680 14:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:06.680 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:06.680 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:06.680 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:06.680 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:06.680 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:06.680 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:06.680 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.938 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.196 request: 00:17:07.196 { 00:17:07.196 "name": "nvme0", 00:17:07.196 "trtype": "tcp", 00:17:07.196 "traddr": "10.0.0.2", 00:17:07.196 "adrfam": "ipv4", 00:17:07.196 "trsvcid": "4420", 00:17:07.196 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.196 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:07.196 "prchk_reftag": false, 00:17:07.196 "prchk_guard": false, 00:17:07.196 "hdgst": false, 00:17:07.196 "ddgst": false, 00:17:07.196 "dhchap_key": "key3", 00:17:07.196 
"method": "bdev_nvme_attach_controller", 00:17:07.196 "req_id": 1 00:17:07.196 } 00:17:07.196 Got JSON-RPC error response 00:17:07.196 response: 00:17:07.196 { 00:17:07.196 "code": -5, 00:17:07.196 "message": "Input/output error" 00:17:07.196 } 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.196 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.454 14:18:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.712 request: 00:17:07.712 { 00:17:07.712 "name": "nvme0", 00:17:07.712 "trtype": "tcp", 00:17:07.712 "traddr": "10.0.0.2", 00:17:07.712 "adrfam": "ipv4", 00:17:07.712 "trsvcid": "4420", 00:17:07.712 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:07.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:07.712 "prchk_reftag": false, 00:17:07.712 "prchk_guard": false, 00:17:07.712 "hdgst": false, 00:17:07.712 "ddgst": false, 00:17:07.712 "dhchap_key": "key0", 00:17:07.712 "dhchap_ctrlr_key": "key1", 00:17:07.712 "method": "bdev_nvme_attach_controller", 00:17:07.712 "req_id": 1 00:17:07.712 } 00:17:07.712 Got JSON-RPC error response 00:17:07.712 response: 00:17:07.712 { 00:17:07.712 "code": -5, 00:17:07.712 "message": "Input/output error" 00:17:07.712 } 00:17:07.712 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:07.712 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:07.712 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:07.712 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:07.712 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:07.713 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:07.970 00:17:07.970 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:07.970 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
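After each successful attach the script verifies the authenticated session from both sides: bdev_nvme_get_controllers on the host socket should list nvme0, and (as in the qpair dump earlier in this trace) nvmf_subsystem_get_qpairs on the target should report the qpair's auth block as completed with the negotiated digest and dhgroup, before the controller is detached again. A hedged sketch of those checks, with jq filters taken from the trace and expected values from the sha512/ffdhe8192 case:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host side: the attached controller must be visible by name.
    name=$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [ "$name" = nvme0 ] || { echo "controller not attached" >&2; exit 1; }

    # Target side: the qpair records the negotiated auth parameters.
    qpairs=$($RPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [ "$(echo "$qpairs" | jq -r '.[0].auth.state')" = completed ] || exit 1
    [ "$(echo "$qpairs" | jq -r '.[0].auth.digest')" = sha512 ] || exit 1
    [ "$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')" = ffdhe8192 ] || exit 1

    # Tear the host-side controller down once the checks pass.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0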
00:17:07.970 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.228 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.228 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.228 14:18:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 907230 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 907230 ']' 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 907230 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 907230 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 907230' 00:17:08.487 killing process with pid 907230 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 907230 00:17:08.487 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 907230 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:09.054 rmmod nvme_tcp 00:17:09.054 rmmod nvme_fabrics 00:17:09.054 rmmod nvme_keyring 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 
928982 ']' 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 928982 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 928982 ']' 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 928982 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 928982 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 928982' 00:17:09.054 killing process with pid 928982 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 928982 00:17:09.054 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 928982 00:17:09.313 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.313 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.313 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.313 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.313 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.313 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.313 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.313 14:18:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.845 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:11.845 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.jkN /tmp/spdk.key-sha256.ieG /tmp/spdk.key-sha384.G8K /tmp/spdk.key-sha512.fFr /tmp/spdk.key-sha512.eCy /tmp/spdk.key-sha384.IIU /tmp/spdk.key-sha256.GLi '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:11.845 00:17:11.845 real 3m1.459s 00:17:11.845 user 7m4.265s 00:17:11.845 sys 0m25.094s 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.846 ************************************ 00:17:11.846 END TEST nvmf_auth_target 00:17:11.846 ************************************ 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.846 ************************************ 00:17:11.846 START TEST nvmf_bdevio_no_huge 00:17:11.846 ************************************ 00:17:11.846 14:18:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:11.846 * Looking for test storage... 00:17:11.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
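The bdevio variant starts the same way every nvmf target test does: it sources test/nvmf/common.sh, which pins the port numbers, generates a fresh host NQN/ID pair with nvme gen-hostnqn, and assembles the standard connect arguments. A rough sketch of that environment, restricted to the variables visible in the trace (how the host ID is derived from the NQN is an assumption here, not shown in the log):

    # Condensed view of the setup performed by test/nvmf/common.sh in this run.
    NVMF_PORT=4420
    NVMF_SECOND_PORT=4421
    NVMF_THIRD_PORT=4422
    NVMF_SERIAL=SPDKISFASTANDAWESOME
    NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumed: reuse the UUID suffix as the host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    NVME_CONNECT='nvme connect'
    NET_TYPE=phy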
00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.846 14:18:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.846 14:18:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.753 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.753 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:13.753 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:13.753 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:13.754 14:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:13.754 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.754 14:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:13.754 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:13.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.754 
14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:13.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.754 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.754 
14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:13.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:13.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:17:13.754 00:17:13.754 --- 10.0.0.2 ping statistics --- 00:17:13.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.755 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:17:13.755 00:17:13.755 --- 10.0.0.1 ping statistics --- 00:17:13.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.755 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=931747 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 931747 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 931747 ']' 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
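What distinguishes this bdevio run is the --no-huge launch: nvmfappstart starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace with hugepages disabled and the memory size capped at 1024 MB, then waits for the RPC socket before any configuration RPCs are issued. A hedged sketch of that launch using only arguments visible in the trace (the polling loop stands in for the waitforlisten helper, whose internals are not shown here):

    NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Launch the target in the test namespace with anonymous memory instead of hugepages.
    ip netns exec cvl_0_0_ns_spdk $NVMF_TGT -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!

    # Stand-in for waitforlisten: poll until /var/tmp/spdk.sock answers RPCs.
    until $RPC -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done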
00:17:13.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:13.755 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:13.755 [2024-07-25 14:18:43.307512] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:17:13.755 [2024-07-25 14:18:43.307603] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:13.755 [2024-07-25 14:18:43.380277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.013 [2024-07-25 14:18:43.490973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.013 [2024-07-25 14:18:43.491027] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.013 [2024-07-25 14:18:43.491041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.013 [2024-07-25 14:18:43.491052] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.013 [2024-07-25 14:18:43.491085] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.013 [2024-07-25 14:18:43.491198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:14.013 [2024-07-25 14:18:43.491268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:14.013 [2024-07-25 14:18:43.491321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:14.013 [2024-07-25 14:18:43.491323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.013 [2024-07-25 14:18:43.616469] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.013 Malloc0 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.013 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:14.013 [2024-07-25 14:18:43.654828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:14.014 { 00:17:14.014 "params": { 00:17:14.014 "name": "Nvme$subsystem", 00:17:14.014 "trtype": "$TEST_TRANSPORT", 00:17:14.014 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:14.014 "adrfam": "ipv4", 00:17:14.014 "trsvcid": "$NVMF_PORT", 00:17:14.014 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:14.014 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:14.014 "hdgst": ${hdgst:-false}, 00:17:14.014 "ddgst": ${ddgst:-false} 00:17:14.014 }, 00:17:14.014 "method": "bdev_nvme_attach_controller" 00:17:14.014 } 00:17:14.014 EOF 00:17:14.014 )") 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:14.014 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@556 -- # jq . 00:17:14.271 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:14.271 14:18:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:14.271 "params": { 00:17:14.271 "name": "Nvme1", 00:17:14.271 "trtype": "tcp", 00:17:14.271 "traddr": "10.0.0.2", 00:17:14.271 "adrfam": "ipv4", 00:17:14.271 "trsvcid": "4420", 00:17:14.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:14.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:14.271 "hdgst": false, 00:17:14.271 "ddgst": false 00:17:14.271 }, 00:17:14.271 "method": "bdev_nvme_attach_controller" 00:17:14.271 }' 00:17:14.271 [2024-07-25 14:18:43.702427] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:17:14.271 [2024-07-25 14:18:43.702526] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid931776 ] 00:17:14.271 [2024-07-25 14:18:43.765048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.271 [2024-07-25 14:18:43.879407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.271 [2024-07-25 14:18:43.879822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.271 [2024-07-25 14:18:43.879829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.530 I/O targets: 00:17:14.530 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:14.530 00:17:14.530 00:17:14.530 CUnit - A unit testing framework for C - Version 2.1-3 00:17:14.530 http://cunit.sourceforge.net/ 00:17:14.530 00:17:14.530 00:17:14.530 Suite: bdevio tests on: Nvme1n1 00:17:14.530 Test: blockdev write read block ...passed 00:17:14.530 Test: blockdev write zeroes read block ...passed 00:17:14.530 Test: blockdev write zeroes read no split ...passed 00:17:14.530 Test: blockdev write zeroes read split ...passed 00:17:14.530 Test: blockdev write zeroes read split partial ...passed 00:17:14.530 Test: blockdev reset ...[2024-07-25 14:18:44.159355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:14.530 [2024-07-25 14:18:44.159474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf4fb0 (9): Bad file descriptor 00:17:14.790 [2024-07-25 14:18:44.268852] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:14.790 passed 00:17:14.790 Test: blockdev write read 8 blocks ...passed 00:17:14.790 Test: blockdev write read size > 128k ...passed 00:17:14.790 Test: blockdev write read invalid size ...passed 00:17:14.790 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:14.790 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:14.790 Test: blockdev write read max offset ...passed 00:17:14.790 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:14.790 Test: blockdev writev readv 8 blocks ...passed 00:17:14.790 Test: blockdev writev readv 30 x 1block ...passed 00:17:15.050 Test: blockdev writev readv block ...passed 00:17:15.050 Test: blockdev writev readv size > 128k ...passed 00:17:15.050 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:15.050 Test: blockdev comparev and writev ...[2024-07-25 14:18:44.520227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:15.050 [2024-07-25 14:18:44.520264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.520288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:15.050 [2024-07-25 14:18:44.520305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.520638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:15.050 [2024-07-25 14:18:44.520662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.520683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:15.050 [2024-07-25 14:18:44.520700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.521040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:15.050 [2024-07-25 14:18:44.521071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.521096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:15.050 [2024-07-25 14:18:44.521112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.521439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:15.050 [2024-07-25 14:18:44.521463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.521486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:15.050 [2024-07-25 14:18:44.521503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:15.050 passed 00:17:15.050 Test: blockdev nvme passthru rw ...passed 00:17:15.050 Test: blockdev nvme passthru vendor specific ...[2024-07-25 14:18:44.603312] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:15.050 [2024-07-25 14:18:44.603340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.603483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:15.050 [2024-07-25 14:18:44.603512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.603648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:15.050 [2024-07-25 14:18:44.603672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:15.050 [2024-07-25 14:18:44.603818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:15.050 [2024-07-25 14:18:44.603841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:15.050 passed 00:17:15.050 Test: blockdev nvme admin passthru ...passed 00:17:15.050 Test: blockdev copy ...passed 00:17:15.050 00:17:15.050 Run Summary: Type Total Ran Passed Failed Inactive 00:17:15.050 suites 1 1 n/a 0 0 00:17:15.051 tests 23 23 23 0 0 00:17:15.051 asserts 152 152 152 0 n/a 00:17:15.051 00:17:15.051 Elapsed time = 1.242 seconds 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:15.620 rmmod nvme_tcp 00:17:15.620 rmmod nvme_fabrics 00:17:15.620 rmmod nvme_keyring 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 931747 ']' 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 931747 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 931747 ']' 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 931747 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 931747 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 931747' 00:17:15.620 killing process with pid 931747 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 931747 00:17:15.620 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 931747 00:17:15.877 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:15.877 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:15.877 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:15.877 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.877 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:15.877 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.877 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:15.877 14:18:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.434 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:18.434 00:17:18.434 real 0m6.585s 00:17:18.434 user 0m10.620s 00:17:18.434 sys 0m2.532s 00:17:18.434 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.434 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.434 ************************************ 00:17:18.434 END TEST nvmf_bdevio_no_huge 00:17:18.434 ************************************ 00:17:18.434 14:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh 
--transport=tcp 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:18.435 ************************************ 00:17:18.435 START TEST nvmf_tls 00:17:18.435 ************************************ 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:18.435 * Looking for test storage... 00:17:18.435 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:18.435 14:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.339 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:20.340 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:20.340 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:20.340 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:20.340 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.340 14:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:20.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:20.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:17:20.340 00:17:20.340 --- 10.0.0.2 ping statistics --- 00:17:20.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.340 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:20.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:17:20.340 00:17:20.340 --- 10.0.0.1 ping statistics --- 00:17:20.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.340 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:20.340 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=933961 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 933961 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 933961 ']' 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:20.341 14:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.599 [2024-07-25 14:18:50.018672] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:17:20.599 [2024-07-25 14:18:50.018770] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.599 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.599 [2024-07-25 14:18:50.085298] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.599 [2024-07-25 14:18:50.195304] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.599 [2024-07-25 14:18:50.195373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.599 [2024-07-25 14:18:50.195399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:20.599 [2024-07-25 14:18:50.195410] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:20.599 [2024-07-25 14:18:50.195420] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:20.599 [2024-07-25 14:18:50.195445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.599 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:20.599 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:20.599 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:20.599 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:20.599 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:20.599 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:20.599 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:20.599 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:20.856 true 00:17:20.856 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:20.856 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:21.116 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:21.116 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:21.116 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:21.374 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:21.374 14:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:21.632 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:21.632 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:21.632 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:17:21.891 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:21.891 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:22.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:22.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:22.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:22.150 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:22.407 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:22.407 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:22.407 14:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:22.668 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:22.668 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:22.927 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:22.927 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:22.927 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:23.187 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:23.187 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:23.448 14:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.EKMYdlzHjT 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.kXIxIDMaKp 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.EKMYdlzHjT 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.kXIxIDMaKp 00:17:23.448 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:23.707 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:23.964 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.EKMYdlzHjT 00:17:23.964 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.EKMYdlzHjT 00:17:23.964 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:24.223 [2024-07-25 14:18:53.834708] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.223 14:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:24.482 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:24.742 [2024-07-25 14:18:54.324076] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.742 [2024-07-25 14:18:54.324352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.742 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:25.001 malloc0 00:17:25.001 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:25.261 14:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EKMYdlzHjT 00:17:25.521 [2024-07-25 14:18:55.073689] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:25.521 14:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.EKMYdlzHjT 00:17:25.521 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.737 Initializing NVMe Controllers 00:17:37.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:37.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:37.737 Initialization complete. Launching workers. 00:17:37.737 ======================================================== 00:17:37.737 Latency(us) 00:17:37.737 Device Information : IOPS MiB/s Average min max 00:17:37.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8760.89 34.22 7307.14 1110.60 8780.42 00:17:37.737 ======================================================== 00:17:37.737 Total : 8760.89 34.22 7307.14 1110.60 8780.42 00:17:37.737 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EKMYdlzHjT 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EKMYdlzHjT' 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=935739 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 935739 /var/tmp/bdevperf.sock 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 935739 ']' 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:37.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:37.737 [2024-07-25 14:19:05.245978] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:17:37.737 [2024-07-25 14:19:05.246051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid935739 ] 00:17:37.737 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.737 [2024-07-25 14:19:05.302167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.737 [2024-07-25 14:19:05.408026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:37.737 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EKMYdlzHjT 00:17:37.737 [2024-07-25 14:19:05.736896] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:37.737 [2024-07-25 14:19:05.737010] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:37.737 TLSTESTn1 00:17:37.738 14:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:37.738 Running I/O for 10 seconds... 
00:17:47.719 00:17:47.719 Latency(us) 00:17:47.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.719 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:47.719 Verification LBA range: start 0x0 length 0x2000 00:17:47.719 TLSTESTn1 : 10.02 3562.65 13.92 0.00 0.00 35867.22 6456.51 29321.29 00:17:47.719 =================================================================================================================== 00:17:47.719 Total : 3562.65 13.92 0.00 0.00 35867.22 6456.51 29321.29 00:17:47.719 0 00:17:47.719 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:47.719 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 935739 00:17:47.719 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 935739 ']' 00:17:47.719 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 935739 00:17:47.719 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.719 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.719 14:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 935739 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 935739' 00:17:47.719 killing process with pid 935739 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 935739 00:17:47.719 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.719 00:17:47.719 Latency(us) 00:17:47.719 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.719 =================================================================================================================== 00:17:47.719 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:47.719 [2024-07-25 14:19:16.022368] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 935739 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kXIxIDMaKp 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kXIxIDMaKp 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.719 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.719 
14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kXIxIDMaKp 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.kXIxIDMaKp' 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937037 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937037 /var/tmp/bdevperf.sock 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 937037 ']' 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.720 [2024-07-25 14:19:16.331018] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:17:47.720 [2024-07-25 14:19:16.331129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937037 ] 00:17:47.720 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.720 [2024-07-25 14:19:16.389083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.720 [2024-07-25 14:19:16.492308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.kXIxIDMaKp 00:17:47.720 [2024-07-25 14:19:16.835845] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.720 [2024-07-25 14:19:16.835951] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:47.720 [2024-07-25 14:19:16.846751] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:47.720 [2024-07-25 14:19:16.847735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf97f90 (107): Transport endpoint is not connected 00:17:47.720 [2024-07-25 14:19:16.848726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf97f90 (9): Bad file descriptor 00:17:47.720 [2024-07-25 14:19:16.849730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:47.720 [2024-07-25 14:19:16.849750] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:47.720 [2024-07-25 14:19:16.849775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
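For reference, the PSK strings produced earlier in this run (NVMeTLSkey-1:01:... and, later, NVMeTLSkey-1:02:...) by the format_key/format_interchange_psk helpers follow the NVMe/TCP TLS PSK interchange layout: a fixed prefix, a two-digit hash indicator, and a base64 blob holding the configured key bytes with a CRC-32 appended, terminated by ':'. A minimal sketch of that layout is below; the little-endian CRC byte order and the hash-indicator meanings (01 roughly SHA-256, 02 roughly SHA-384) are assumptions of this sketch and are not shown in this log.

import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # Sketch of the NVMeTLSkey-1 interchange strings used in this run.
    # Assumptions: the configured secret is the literal ASCII key string,
    # the CRC-32 is appended little-endian, and digest 1/2 map to the
    # 01/02 hash indicator.
    psk = key.encode("ascii")
    crc = zlib.crc32(psk).to_bytes(4, "little")  # byte order assumed
    blob = base64.b64encode(psk + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02}:{blob}:"

# Reproduces the shape of the keys set up above (values taken from this log):
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))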
00:17:47.720 request: 00:17:47.720 { 00:17:47.720 "name": "TLSTEST", 00:17:47.720 "trtype": "tcp", 00:17:47.720 "traddr": "10.0.0.2", 00:17:47.720 "adrfam": "ipv4", 00:17:47.720 "trsvcid": "4420", 00:17:47.720 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:47.720 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:47.720 "prchk_reftag": false, 00:17:47.720 "prchk_guard": false, 00:17:47.720 "hdgst": false, 00:17:47.720 "ddgst": false, 00:17:47.720 "psk": "/tmp/tmp.kXIxIDMaKp", 00:17:47.720 "method": "bdev_nvme_attach_controller", 00:17:47.720 "req_id": 1 00:17:47.720 } 00:17:47.720 Got JSON-RPC error response 00:17:47.720 response: 00:17:47.720 { 00:17:47.720 "code": -5, 00:17:47.720 "message": "Input/output error" 00:17:47.720 } 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 937037 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 937037 ']' 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 937037 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937037 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937037' 00:17:47.720 killing process with pid 937037 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 937037 00:17:47.720 Received shutdown signal, test time was about 10.000000 seconds 00:17:47.720 00:17:47.720 Latency(us) 00:17:47.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.720 =================================================================================================================== 00:17:47.720 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.720 [2024-07-25 14:19:16.890938] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:47.720 14:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 937037 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EKMYdlzHjT 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EKMYdlzHjT 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EKMYdlzHjT 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EKMYdlzHjT' 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937073 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937073 /var/tmp/bdevperf.sock 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 937073 ']' 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:47.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.720 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:47.720 [2024-07-25 14:19:17.161699] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:17:47.721 [2024-07-25 14:19:17.161785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937073 ] 00:17:47.721 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.721 [2024-07-25 14:19:17.220361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.721 [2024-07-25 14:19:17.330690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.979 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:47.979 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:47.979 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.EKMYdlzHjT 00:17:48.238 [2024-07-25 14:19:17.660327] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:48.238 [2024-07-25 14:19:17.660459] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:48.238 [2024-07-25 14:19:17.668836] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:48.238 [2024-07-25 14:19:17.668870] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:48.238 [2024-07-25 14:19:17.668910] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:48.238 [2024-07-25 14:19:17.669335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387f90 (107): Transport endpoint is not connected 00:17:48.238 [2024-07-25 14:19:17.670323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387f90 (9): Bad file descriptor 00:17:48.238 [2024-07-25 14:19:17.671322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:48.238 [2024-07-25 14:19:17.671344] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:48.238 [2024-07-25 14:19:17.671376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
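The request/response dumps in these failure cases are plain JSON-RPC exchanged over the bdevperf UNIX domain socket, so they can be replayed by hand. A minimal sketch follows, with parameter values copied from the dump printed after it; the one-shot send/recv framing is a simplification of what scripts/rpc.py does, and the optional prchk/digest flags from the dump are omitted.

import json
import socket

# Replay the failing bdev_nvme_attach_controller call against bdevperf's
# RPC socket. The error object in the reply corresponds to the
# "Input/output error" response dumped in this log.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host2",
        "psk": "/tmp/tmp.EKMYdlzHjT",
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/bdevperf.sock")
    sock.sendall(json.dumps(request).encode())
    print(json.loads(sock.recv(65536)))  # small response assumed to arrive in one read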
00:17:48.238 request: 00:17:48.238 { 00:17:48.238 "name": "TLSTEST", 00:17:48.238 "trtype": "tcp", 00:17:48.238 "traddr": "10.0.0.2", 00:17:48.238 "adrfam": "ipv4", 00:17:48.238 "trsvcid": "4420", 00:17:48.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:48.238 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:48.238 "prchk_reftag": false, 00:17:48.238 "prchk_guard": false, 00:17:48.238 "hdgst": false, 00:17:48.238 "ddgst": false, 00:17:48.238 "psk": "/tmp/tmp.EKMYdlzHjT", 00:17:48.238 "method": "bdev_nvme_attach_controller", 00:17:48.238 "req_id": 1 00:17:48.238 } 00:17:48.238 Got JSON-RPC error response 00:17:48.238 response: 00:17:48.238 { 00:17:48.238 "code": -5, 00:17:48.238 "message": "Input/output error" 00:17:48.238 } 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 937073 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 937073 ']' 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 937073 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937073 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937073' 00:17:48.238 killing process with pid 937073 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 937073 00:17:48.238 Received shutdown signal, test time was about 10.000000 seconds 00:17:48.238 00:17:48.238 Latency(us) 00:17:48.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.238 =================================================================================================================== 00:17:48.238 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:48.238 [2024-07-25 14:19:17.722169] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:48.238 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 937073 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EKMYdlzHjT 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EKMYdlzHjT 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EKMYdlzHjT 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EKMYdlzHjT' 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937211 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937211 /var/tmp/bdevperf.sock 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 937211 ']' 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.496 14:19:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:48.496 [2024-07-25 14:19:18.022241] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:17:48.496 [2024-07-25 14:19:18.022332] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937211 ] 00:17:48.496 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.496 [2024-07-25 14:19:18.080512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.755 [2024-07-25 14:19:18.183295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.755 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.755 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:48.755 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EKMYdlzHjT 00:17:49.022 [2024-07-25 14:19:18.527991] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.022 [2024-07-25 14:19:18.528128] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:49.022 [2024-07-25 14:19:18.533574] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:49.022 [2024-07-25 14:19:18.533609] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:49.022 [2024-07-25 14:19:18.533650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:49.022 [2024-07-25 14:19:18.534126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b42f90 (107): Transport endpoint is not connected 00:17:49.022 [2024-07-25 14:19:18.535125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b42f90 (9): Bad file descriptor 00:17:49.022 [2024-07-25 14:19:18.536124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:49.022 [2024-07-25 14:19:18.536147] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:49.022 [2024-07-25 14:19:18.536165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:49.022 request: 00:17:49.022 { 00:17:49.022 "name": "TLSTEST", 00:17:49.022 "trtype": "tcp", 00:17:49.022 "traddr": "10.0.0.2", 00:17:49.022 "adrfam": "ipv4", 00:17:49.022 "trsvcid": "4420", 00:17:49.022 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:49.022 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.022 "prchk_reftag": false, 00:17:49.022 "prchk_guard": false, 00:17:49.022 "hdgst": false, 00:17:49.022 "ddgst": false, 00:17:49.022 "psk": "/tmp/tmp.EKMYdlzHjT", 00:17:49.022 "method": "bdev_nvme_attach_controller", 00:17:49.022 "req_id": 1 00:17:49.022 } 00:17:49.022 Got JSON-RPC error response 00:17:49.022 response: 00:17:49.022 { 00:17:49.022 "code": -5, 00:17:49.022 "message": "Input/output error" 00:17:49.022 } 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 937211 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 937211 ']' 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 937211 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937211 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937211' 00:17:49.022 killing process with pid 937211 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 937211 00:17:49.022 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.022 00:17:49.022 Latency(us) 00:17:49.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.022 =================================================================================================================== 00:17:49.022 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:49.022 [2024-07-25 14:19:18.579950] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:49.022 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 937211 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937348 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937348 /var/tmp/bdevperf.sock 00:17:49.292 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 937348 ']' 00:17:49.293 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:49.293 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:49.293 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:49.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:49.293 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:49.293 14:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.293 [2024-07-25 14:19:18.863953] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:17:49.293 [2024-07-25 14:19:18.864040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937348 ] 00:17:49.293 EAL: No free 2048 kB hugepages reported on node 1 00:17:49.293 [2024-07-25 14:19:18.923290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.551 [2024-07-25 14:19:19.033149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.551 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.551 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:49.551 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:49.810 [2024-07-25 14:19:19.373274] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:49.810 [2024-07-25 14:19:19.375307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7e770 (9): Bad file descriptor 00:17:49.810 [2024-07-25 14:19:19.376302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:49.810 [2024-07-25 14:19:19.376323] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:49.810 [2024-07-25 14:19:19.376348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:49.810 request: 00:17:49.810 { 00:17:49.810 "name": "TLSTEST", 00:17:49.810 "trtype": "tcp", 00:17:49.810 "traddr": "10.0.0.2", 00:17:49.810 "adrfam": "ipv4", 00:17:49.810 "trsvcid": "4420", 00:17:49.810 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:49.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:49.810 "prchk_reftag": false, 00:17:49.810 "prchk_guard": false, 00:17:49.810 "hdgst": false, 00:17:49.810 "ddgst": false, 00:17:49.810 "method": "bdev_nvme_attach_controller", 00:17:49.810 "req_id": 1 00:17:49.810 } 00:17:49.810 Got JSON-RPC error response 00:17:49.810 response: 00:17:49.810 { 00:17:49.810 "code": -5, 00:17:49.810 "message": "Input/output error" 00:17:49.810 } 00:17:49.810 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 937348 00:17:49.810 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 937348 ']' 00:17:49.810 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 937348 00:17:49.810 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:49.810 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:49.810 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937348 00:17:49.811 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:49.811 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:49.811 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937348' 00:17:49.811 killing process with pid 937348 00:17:49.811 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 937348 00:17:49.811 Received shutdown signal, test time was about 10.000000 seconds 00:17:49.811 00:17:49.811 Latency(us) 00:17:49.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:49.811 =================================================================================================================== 00:17:49.811 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:49.811 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 937348 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 933961 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 933961 ']' 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 933961 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:50.070 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.071 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 933961 00:17:50.071 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:50.071 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:50.071 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 933961' 00:17:50.071 killing process with pid 933961 00:17:50.071 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 933961 00:17:50.071 [2024-07-25 14:19:19.710094] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:50.071 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 933961 00:17:50.641 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:50.641 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:50.641 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:50.641 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:50.641 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:50.641 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:50.641 14:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.3vuAI12gxd 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.3vuAI12gxd 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=937500 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 937500 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 937500 ']' 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.641 14:19:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:50.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.641 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.642 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.642 [2024-07-25 14:19:20.100440] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:17:50.642 [2024-07-25 14:19:20.100526] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.642 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.642 [2024-07-25 14:19:20.168649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.642 [2024-07-25 14:19:20.280028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.642 [2024-07-25 14:19:20.280106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.642 [2024-07-25 14:19:20.280122] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.642 [2024-07-25 14:19:20.280134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.642 [2024-07-25 14:19:20.280144] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:50.642 [2024-07-25 14:19:20.280183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.3vuAI12gxd 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3vuAI12gxd 00:17:50.900 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:51.158 [2024-07-25 14:19:20.699844] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.158 14:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:51.416 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:51.674 [2024-07-25 14:19:21.269369] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:51.674 [2024-07-25 14:19:21.269589] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.674 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:51.932 malloc0 00:17:51.932 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:52.190 14:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3vuAI12gxd 00:17:52.448 [2024-07-25 14:19:22.045576] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3vuAI12gxd 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3vuAI12gxd' 00:17:52.448 14:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=937782 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 937782 /var/tmp/bdevperf.sock 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 937782 ']' 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.448 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.706 [2024-07-25 14:19:22.104220] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:17:52.706 [2024-07-25 14:19:22.104301] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid937782 ] 00:17:52.706 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.706 [2024-07-25 14:19:22.162890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.706 [2024-07-25 14:19:22.270905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.966 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.966 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.966 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3vuAI12gxd 00:17:52.966 [2024-07-25 14:19:22.600279] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.966 [2024-07-25 14:19:22.600436] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:53.225 TLSTESTn1 00:17:53.226 14:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:53.226 Running I/O for 10 seconds... 
00:18:03.207 00:18:03.207 Latency(us) 00:18:03.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.207 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:03.207 Verification LBA range: start 0x0 length 0x2000 00:18:03.207 TLSTESTn1 : 10.03 3508.30 13.70 0.00 0.00 36413.72 6505.05 56700.78 00:18:03.207 =================================================================================================================== 00:18:03.207 Total : 3508.30 13.70 0.00 0.00 36413.72 6505.05 56700.78 00:18:03.207 0 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 937782 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 937782 ']' 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 937782 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937782 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937782' 00:18:03.466 killing process with pid 937782 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 937782 00:18:03.466 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.466 00:18:03.466 Latency(us) 00:18:03.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.466 =================================================================================================================== 00:18:03.466 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.466 [2024-07-25 14:19:32.893204] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:03.466 14:19:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 937782 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.3vuAI12gxd 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3vuAI12gxd 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3vuAI12gxd 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:03.725 14:19:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.3vuAI12gxd 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.3vuAI12gxd' 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=938983 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 938983 /var/tmp/bdevperf.sock 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 938983 ']' 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.725 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.725 [2024-07-25 14:19:33.212378] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:18:03.725 [2024-07-25 14:19:33.212468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid938983 ] 00:18:03.725 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.725 [2024-07-25 14:19:33.271125] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.982 [2024-07-25 14:19:33.382206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.982 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.982 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:03.982 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3vuAI12gxd 00:18:04.239 [2024-07-25 14:19:33.708758] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:04.239 [2024-07-25 14:19:33.708848] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:04.239 [2024-07-25 14:19:33.708862] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.3vuAI12gxd 00:18:04.239 request: 00:18:04.239 { 00:18:04.239 "name": "TLSTEST", 00:18:04.239 "trtype": "tcp", 00:18:04.239 "traddr": "10.0.0.2", 00:18:04.239 "adrfam": "ipv4", 00:18:04.239 "trsvcid": "4420", 00:18:04.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.239 "prchk_reftag": false, 00:18:04.239 "prchk_guard": false, 00:18:04.239 "hdgst": false, 00:18:04.239 "ddgst": false, 00:18:04.239 "psk": "/tmp/tmp.3vuAI12gxd", 00:18:04.239 "method": "bdev_nvme_attach_controller", 00:18:04.239 "req_id": 1 00:18:04.239 } 00:18:04.239 Got JSON-RPC error response 00:18:04.239 response: 00:18:04.239 { 00:18:04.239 "code": -1, 00:18:04.239 "message": "Operation not permitted" 00:18:04.239 } 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 938983 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 938983 ']' 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 938983 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 938983 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 938983' 00:18:04.239 killing process with pid 938983 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 938983 00:18:04.239 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.239 
00:18:04.239 Latency(us) 00:18:04.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.239 =================================================================================================================== 00:18:04.239 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.239 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 938983 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 937500 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 937500 ']' 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 937500 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.498 14:19:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 937500 00:18:04.498 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.498 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.498 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 937500' 00:18:04.498 killing process with pid 937500 00:18:04.498 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 937500 00:18:04.498 [2024-07-25 14:19:34.016671] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:04.498 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 937500 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=939128 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 939128 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 939128 ']' 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.757 14:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:04.757 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.757 [2024-07-25 14:19:34.348553] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:04.757 [2024-07-25 14:19:34.348639] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.757 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.014 [2024-07-25 14:19:34.413052] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.014 [2024-07-25 14:19:34.520041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.014 [2024-07-25 14:19:34.520103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.014 [2024-07-25 14:19:34.520117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.014 [2024-07-25 14:19:34.520127] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.015 [2024-07-25 14:19:34.520137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
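The failed attach recorded above is the point of target/tls.sh@170-171: with the PSK file deliberately chmod'd to 0666, bdev_nvme_attach_controller must be rejected ("Incorrect permissions for PSK file", JSON-RPC "Operation not permitted"). A minimal sketch of that negative check, assuming an SPDK checkout at ./spdk and reusing the key path and flags from this run; the $rpc and $key variable names are illustrative only:

rpc=./spdk/scripts/rpc.py        # assumed checkout location; the CI job uses its Jenkins workspace path
key=/tmp/tmp.3vuAI12gxd          # PSK file created earlier in this test run

chmod 0666 "$key"                # deliberately too-permissive mode
# The attach over the bdevperf RPC socket must fail while the key is world-accessible.
if "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"; then
    echo "unexpected success: PSK with mode 0666 was accepted" >&2
    exit 1
fi
chmod 0600 "$key"                # restore the mode the target accepts (done at tls.sh@181 in this log)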
00:18:05.015 [2024-07-25 14:19:34.520165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.3vuAI12gxd 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.3vuAI12gxd 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.3vuAI12gxd 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3vuAI12gxd 00:18:05.015 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:05.579 [2024-07-25 14:19:34.933975] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.579 14:19:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:05.579 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:05.836 [2024-07-25 14:19:35.435383] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.836 [2024-07-25 14:19:35.435662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.836 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.094 malloc0 00:18:06.094 14:19:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3vuAI12gxd 00:18:06.660 [2024-07-25 14:19:36.268427] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:06.660 [2024-07-25 14:19:36.268459] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:06.660 [2024-07-25 14:19:36.268498] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:06.660 request: 00:18:06.660 { 00:18:06.660 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.660 "host": "nqn.2016-06.io.spdk:host1", 00:18:06.660 "psk": "/tmp/tmp.3vuAI12gxd", 00:18:06.660 "method": "nvmf_subsystem_add_host", 00:18:06.660 "req_id": 1 00:18:06.660 } 00:18:06.660 Got JSON-RPC error response 00:18:06.660 response: 00:18:06.660 { 00:18:06.660 "code": -32603, 00:18:06.660 "message": "Internal error" 00:18:06.660 } 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 939128 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 939128 ']' 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 939128 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.660 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 939128 00:18:06.919 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:06.919 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:06.919 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 939128' 00:18:06.919 killing process with pid 939128 00:18:06.919 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 939128 00:18:06.919 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 939128 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.3vuAI12gxd 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=939424 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
939424 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 939424 ']' 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.177 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.177 [2024-07-25 14:19:36.647269] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:07.177 [2024-07-25 14:19:36.647372] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.177 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.177 [2024-07-25 14:19:36.712661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.177 [2024-07-25 14:19:36.821697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.177 [2024-07-25 14:19:36.821751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.177 [2024-07-25 14:19:36.821775] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.177 [2024-07-25 14:19:36.821786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.177 [2024-07-25 14:19:36.821796] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
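The sequence replayed above (tls.sh@51-58) builds the TLS-enabled target: a TCP transport, a subsystem, a listener created with -k, a malloc0 namespace, and finally nvmf_subsystem_add_host with the PSK. While the key file was still mode 0666 the add_host call failed with -32603 "Internal error"; after chmod 0600 and a target restart the same sequence succeeds below. A condensed sketch of that setup path, assuming an SPDK checkout at ./spdk; the variable names are illustrative and all flags are copied from this run:

rpc=./spdk/scripts/rpc.py        # assumed checkout location
key=/tmp/tmp.3vuAI12gxd

"$rpc" nvmf_create_transport -t tcp -o
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
# -k marks the listener as TLS-enabled (still flagged experimental at this SPDK revision)
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
"$rpc" bdev_malloc_create 32 4096 -b malloc0
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Succeeds only when the PSK file is mode 0600; with 0666 it returns -32603 Internal error.
"$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"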
00:18:07.177 [2024-07-25 14:19:36.821821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.3vuAI12gxd 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3vuAI12gxd 00:18:07.435 14:19:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:07.694 [2024-07-25 14:19:37.234336] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.694 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:07.951 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:08.209 [2024-07-25 14:19:37.719586] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:08.209 [2024-07-25 14:19:37.719808] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.209 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:08.467 malloc0 00:18:08.467 14:19:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:08.725 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3vuAI12gxd 00:18:08.983 [2024-07-25 14:19:38.521154] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=939706 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 939706 /var/tmp/bdevperf.sock 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # 
'[' -z 939706 ']' 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:08.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:08.983 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:08.984 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.984 [2024-07-25 14:19:38.584183] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:08.984 [2024-07-25 14:19:38.584265] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid939706 ] 00:18:08.984 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.242 [2024-07-25 14:19:38.642926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.242 [2024-07-25 14:19:38.751775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.242 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:09.242 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:09.242 14:19:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3vuAI12gxd 00:18:09.502 [2024-07-25 14:19:39.100449] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:09.502 [2024-07-25 14:19:39.100580] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:09.760 TLSTESTn1 00:18:09.760 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:10.019 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:10.019 "subsystems": [ 00:18:10.019 { 00:18:10.019 "subsystem": "keyring", 00:18:10.019 "config": [] 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "subsystem": "iobuf", 00:18:10.019 "config": [ 00:18:10.019 { 00:18:10.019 "method": "iobuf_set_options", 00:18:10.019 "params": { 00:18:10.019 "small_pool_count": 8192, 00:18:10.019 "large_pool_count": 1024, 00:18:10.019 "small_bufsize": 8192, 00:18:10.019 "large_bufsize": 135168 00:18:10.019 } 00:18:10.019 } 00:18:10.019 ] 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "subsystem": "sock", 00:18:10.019 "config": [ 00:18:10.019 { 00:18:10.019 "method": "sock_set_default_impl", 00:18:10.019 "params": { 00:18:10.019 "impl_name": "posix" 00:18:10.019 } 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "method": "sock_impl_set_options", 00:18:10.019 "params": { 00:18:10.019 "impl_name": "ssl", 00:18:10.019 "recv_buf_size": 4096, 00:18:10.019 "send_buf_size": 4096, 
00:18:10.019 "enable_recv_pipe": true, 00:18:10.019 "enable_quickack": false, 00:18:10.019 "enable_placement_id": 0, 00:18:10.019 "enable_zerocopy_send_server": true, 00:18:10.019 "enable_zerocopy_send_client": false, 00:18:10.019 "zerocopy_threshold": 0, 00:18:10.019 "tls_version": 0, 00:18:10.019 "enable_ktls": false 00:18:10.019 } 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "method": "sock_impl_set_options", 00:18:10.019 "params": { 00:18:10.019 "impl_name": "posix", 00:18:10.019 "recv_buf_size": 2097152, 00:18:10.019 "send_buf_size": 2097152, 00:18:10.019 "enable_recv_pipe": true, 00:18:10.019 "enable_quickack": false, 00:18:10.019 "enable_placement_id": 0, 00:18:10.019 "enable_zerocopy_send_server": true, 00:18:10.019 "enable_zerocopy_send_client": false, 00:18:10.019 "zerocopy_threshold": 0, 00:18:10.019 "tls_version": 0, 00:18:10.019 "enable_ktls": false 00:18:10.019 } 00:18:10.019 } 00:18:10.019 ] 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "subsystem": "vmd", 00:18:10.019 "config": [] 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "subsystem": "accel", 00:18:10.019 "config": [ 00:18:10.019 { 00:18:10.019 "method": "accel_set_options", 00:18:10.019 "params": { 00:18:10.019 "small_cache_size": 128, 00:18:10.019 "large_cache_size": 16, 00:18:10.019 "task_count": 2048, 00:18:10.019 "sequence_count": 2048, 00:18:10.019 "buf_count": 2048 00:18:10.019 } 00:18:10.019 } 00:18:10.019 ] 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "subsystem": "bdev", 00:18:10.019 "config": [ 00:18:10.019 { 00:18:10.019 "method": "bdev_set_options", 00:18:10.019 "params": { 00:18:10.019 "bdev_io_pool_size": 65535, 00:18:10.019 "bdev_io_cache_size": 256, 00:18:10.019 "bdev_auto_examine": true, 00:18:10.019 "iobuf_small_cache_size": 128, 00:18:10.019 "iobuf_large_cache_size": 16 00:18:10.019 } 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "method": "bdev_raid_set_options", 00:18:10.019 "params": { 00:18:10.019 "process_window_size_kb": 1024, 00:18:10.019 "process_max_bandwidth_mb_sec": 0 00:18:10.019 } 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "method": "bdev_iscsi_set_options", 00:18:10.019 "params": { 00:18:10.019 "timeout_sec": 30 00:18:10.019 } 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "method": "bdev_nvme_set_options", 00:18:10.019 "params": { 00:18:10.019 "action_on_timeout": "none", 00:18:10.019 "timeout_us": 0, 00:18:10.019 "timeout_admin_us": 0, 00:18:10.019 "keep_alive_timeout_ms": 10000, 00:18:10.019 "arbitration_burst": 0, 00:18:10.019 "low_priority_weight": 0, 00:18:10.019 "medium_priority_weight": 0, 00:18:10.019 "high_priority_weight": 0, 00:18:10.019 "nvme_adminq_poll_period_us": 10000, 00:18:10.019 "nvme_ioq_poll_period_us": 0, 00:18:10.019 "io_queue_requests": 0, 00:18:10.019 "delay_cmd_submit": true, 00:18:10.019 "transport_retry_count": 4, 00:18:10.019 "bdev_retry_count": 3, 00:18:10.019 "transport_ack_timeout": 0, 00:18:10.019 "ctrlr_loss_timeout_sec": 0, 00:18:10.019 "reconnect_delay_sec": 0, 00:18:10.019 "fast_io_fail_timeout_sec": 0, 00:18:10.019 "disable_auto_failback": false, 00:18:10.019 "generate_uuids": false, 00:18:10.019 "transport_tos": 0, 00:18:10.019 "nvme_error_stat": false, 00:18:10.019 "rdma_srq_size": 0, 00:18:10.019 "io_path_stat": false, 00:18:10.019 "allow_accel_sequence": false, 00:18:10.019 "rdma_max_cq_size": 0, 00:18:10.019 "rdma_cm_event_timeout_ms": 0, 00:18:10.019 "dhchap_digests": [ 00:18:10.019 "sha256", 00:18:10.019 "sha384", 00:18:10.019 "sha512" 00:18:10.019 ], 00:18:10.019 "dhchap_dhgroups": [ 00:18:10.019 "null", 00:18:10.019 "ffdhe2048", 00:18:10.019 
"ffdhe3072", 00:18:10.019 "ffdhe4096", 00:18:10.019 "ffdhe6144", 00:18:10.019 "ffdhe8192" 00:18:10.019 ] 00:18:10.019 } 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "method": "bdev_nvme_set_hotplug", 00:18:10.019 "params": { 00:18:10.019 "period_us": 100000, 00:18:10.019 "enable": false 00:18:10.019 } 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "method": "bdev_malloc_create", 00:18:10.019 "params": { 00:18:10.019 "name": "malloc0", 00:18:10.019 "num_blocks": 8192, 00:18:10.019 "block_size": 4096, 00:18:10.019 "physical_block_size": 4096, 00:18:10.019 "uuid": "9b5182e8-ac58-43ec-b9cd-15f4c6344d66", 00:18:10.019 "optimal_io_boundary": 0, 00:18:10.019 "md_size": 0, 00:18:10.019 "dif_type": 0, 00:18:10.019 "dif_is_head_of_md": false, 00:18:10.019 "dif_pi_format": 0 00:18:10.019 } 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "method": "bdev_wait_for_examine" 00:18:10.019 } 00:18:10.019 ] 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "subsystem": "nbd", 00:18:10.019 "config": [] 00:18:10.019 }, 00:18:10.019 { 00:18:10.019 "subsystem": "scheduler", 00:18:10.020 "config": [ 00:18:10.020 { 00:18:10.020 "method": "framework_set_scheduler", 00:18:10.020 "params": { 00:18:10.020 "name": "static" 00:18:10.020 } 00:18:10.020 } 00:18:10.020 ] 00:18:10.020 }, 00:18:10.020 { 00:18:10.020 "subsystem": "nvmf", 00:18:10.020 "config": [ 00:18:10.020 { 00:18:10.020 "method": "nvmf_set_config", 00:18:10.020 "params": { 00:18:10.020 "discovery_filter": "match_any", 00:18:10.020 "admin_cmd_passthru": { 00:18:10.020 "identify_ctrlr": false 00:18:10.020 } 00:18:10.020 } 00:18:10.020 }, 00:18:10.020 { 00:18:10.020 "method": "nvmf_set_max_subsystems", 00:18:10.020 "params": { 00:18:10.020 "max_subsystems": 1024 00:18:10.020 } 00:18:10.020 }, 00:18:10.020 { 00:18:10.020 "method": "nvmf_set_crdt", 00:18:10.020 "params": { 00:18:10.020 "crdt1": 0, 00:18:10.020 "crdt2": 0, 00:18:10.020 "crdt3": 0 00:18:10.020 } 00:18:10.020 }, 00:18:10.020 { 00:18:10.020 "method": "nvmf_create_transport", 00:18:10.020 "params": { 00:18:10.020 "trtype": "TCP", 00:18:10.020 "max_queue_depth": 128, 00:18:10.020 "max_io_qpairs_per_ctrlr": 127, 00:18:10.020 "in_capsule_data_size": 4096, 00:18:10.020 "max_io_size": 131072, 00:18:10.020 "io_unit_size": 131072, 00:18:10.020 "max_aq_depth": 128, 00:18:10.020 "num_shared_buffers": 511, 00:18:10.020 "buf_cache_size": 4294967295, 00:18:10.020 "dif_insert_or_strip": false, 00:18:10.020 "zcopy": false, 00:18:10.020 "c2h_success": false, 00:18:10.020 "sock_priority": 0, 00:18:10.020 "abort_timeout_sec": 1, 00:18:10.020 "ack_timeout": 0, 00:18:10.020 "data_wr_pool_size": 0 00:18:10.020 } 00:18:10.020 }, 00:18:10.020 { 00:18:10.020 "method": "nvmf_create_subsystem", 00:18:10.020 "params": { 00:18:10.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.020 "allow_any_host": false, 00:18:10.020 "serial_number": "SPDK00000000000001", 00:18:10.020 "model_number": "SPDK bdev Controller", 00:18:10.020 "max_namespaces": 10, 00:18:10.020 "min_cntlid": 1, 00:18:10.020 "max_cntlid": 65519, 00:18:10.020 "ana_reporting": false 00:18:10.020 } 00:18:10.020 }, 00:18:10.020 { 00:18:10.020 "method": "nvmf_subsystem_add_host", 00:18:10.020 "params": { 00:18:10.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.020 "host": "nqn.2016-06.io.spdk:host1", 00:18:10.020 "psk": "/tmp/tmp.3vuAI12gxd" 00:18:10.020 } 00:18:10.020 }, 00:18:10.020 { 00:18:10.020 "method": "nvmf_subsystem_add_ns", 00:18:10.020 "params": { 00:18:10.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.020 "namespace": { 00:18:10.020 "nsid": 1, 00:18:10.020 
"bdev_name": "malloc0", 00:18:10.020 "nguid": "9B5182E8AC5843ECB9CD15F4C6344D66", 00:18:10.020 "uuid": "9b5182e8-ac58-43ec-b9cd-15f4c6344d66", 00:18:10.020 "no_auto_visible": false 00:18:10.020 } 00:18:10.020 } 00:18:10.020 }, 00:18:10.020 { 00:18:10.020 "method": "nvmf_subsystem_add_listener", 00:18:10.020 "params": { 00:18:10.020 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.020 "listen_address": { 00:18:10.020 "trtype": "TCP", 00:18:10.020 "adrfam": "IPv4", 00:18:10.020 "traddr": "10.0.0.2", 00:18:10.020 "trsvcid": "4420" 00:18:10.020 }, 00:18:10.020 "secure_channel": true 00:18:10.020 } 00:18:10.020 } 00:18:10.020 ] 00:18:10.020 } 00:18:10.020 ] 00:18:10.020 }' 00:18:10.020 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:10.280 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:10.280 "subsystems": [ 00:18:10.280 { 00:18:10.280 "subsystem": "keyring", 00:18:10.280 "config": [] 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "subsystem": "iobuf", 00:18:10.280 "config": [ 00:18:10.280 { 00:18:10.280 "method": "iobuf_set_options", 00:18:10.280 "params": { 00:18:10.280 "small_pool_count": 8192, 00:18:10.280 "large_pool_count": 1024, 00:18:10.280 "small_bufsize": 8192, 00:18:10.280 "large_bufsize": 135168 00:18:10.280 } 00:18:10.280 } 00:18:10.280 ] 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "subsystem": "sock", 00:18:10.280 "config": [ 00:18:10.280 { 00:18:10.280 "method": "sock_set_default_impl", 00:18:10.280 "params": { 00:18:10.280 "impl_name": "posix" 00:18:10.280 } 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "method": "sock_impl_set_options", 00:18:10.280 "params": { 00:18:10.280 "impl_name": "ssl", 00:18:10.280 "recv_buf_size": 4096, 00:18:10.280 "send_buf_size": 4096, 00:18:10.280 "enable_recv_pipe": true, 00:18:10.280 "enable_quickack": false, 00:18:10.280 "enable_placement_id": 0, 00:18:10.280 "enable_zerocopy_send_server": true, 00:18:10.280 "enable_zerocopy_send_client": false, 00:18:10.280 "zerocopy_threshold": 0, 00:18:10.280 "tls_version": 0, 00:18:10.280 "enable_ktls": false 00:18:10.280 } 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "method": "sock_impl_set_options", 00:18:10.280 "params": { 00:18:10.280 "impl_name": "posix", 00:18:10.280 "recv_buf_size": 2097152, 00:18:10.280 "send_buf_size": 2097152, 00:18:10.280 "enable_recv_pipe": true, 00:18:10.280 "enable_quickack": false, 00:18:10.280 "enable_placement_id": 0, 00:18:10.280 "enable_zerocopy_send_server": true, 00:18:10.280 "enable_zerocopy_send_client": false, 00:18:10.280 "zerocopy_threshold": 0, 00:18:10.280 "tls_version": 0, 00:18:10.280 "enable_ktls": false 00:18:10.280 } 00:18:10.280 } 00:18:10.280 ] 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "subsystem": "vmd", 00:18:10.280 "config": [] 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "subsystem": "accel", 00:18:10.280 "config": [ 00:18:10.280 { 00:18:10.280 "method": "accel_set_options", 00:18:10.280 "params": { 00:18:10.280 "small_cache_size": 128, 00:18:10.280 "large_cache_size": 16, 00:18:10.280 "task_count": 2048, 00:18:10.280 "sequence_count": 2048, 00:18:10.280 "buf_count": 2048 00:18:10.280 } 00:18:10.280 } 00:18:10.280 ] 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "subsystem": "bdev", 00:18:10.280 "config": [ 00:18:10.280 { 00:18:10.280 "method": "bdev_set_options", 00:18:10.280 "params": { 00:18:10.280 "bdev_io_pool_size": 65535, 00:18:10.280 "bdev_io_cache_size": 256, 00:18:10.280 
"bdev_auto_examine": true, 00:18:10.280 "iobuf_small_cache_size": 128, 00:18:10.280 "iobuf_large_cache_size": 16 00:18:10.280 } 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "method": "bdev_raid_set_options", 00:18:10.280 "params": { 00:18:10.280 "process_window_size_kb": 1024, 00:18:10.280 "process_max_bandwidth_mb_sec": 0 00:18:10.280 } 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "method": "bdev_iscsi_set_options", 00:18:10.280 "params": { 00:18:10.280 "timeout_sec": 30 00:18:10.280 } 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "method": "bdev_nvme_set_options", 00:18:10.280 "params": { 00:18:10.280 "action_on_timeout": "none", 00:18:10.280 "timeout_us": 0, 00:18:10.280 "timeout_admin_us": 0, 00:18:10.280 "keep_alive_timeout_ms": 10000, 00:18:10.280 "arbitration_burst": 0, 00:18:10.280 "low_priority_weight": 0, 00:18:10.280 "medium_priority_weight": 0, 00:18:10.280 "high_priority_weight": 0, 00:18:10.280 "nvme_adminq_poll_period_us": 10000, 00:18:10.280 "nvme_ioq_poll_period_us": 0, 00:18:10.280 "io_queue_requests": 512, 00:18:10.280 "delay_cmd_submit": true, 00:18:10.280 "transport_retry_count": 4, 00:18:10.280 "bdev_retry_count": 3, 00:18:10.280 "transport_ack_timeout": 0, 00:18:10.280 "ctrlr_loss_timeout_sec": 0, 00:18:10.280 "reconnect_delay_sec": 0, 00:18:10.280 "fast_io_fail_timeout_sec": 0, 00:18:10.280 "disable_auto_failback": false, 00:18:10.280 "generate_uuids": false, 00:18:10.280 "transport_tos": 0, 00:18:10.280 "nvme_error_stat": false, 00:18:10.280 "rdma_srq_size": 0, 00:18:10.280 "io_path_stat": false, 00:18:10.280 "allow_accel_sequence": false, 00:18:10.280 "rdma_max_cq_size": 0, 00:18:10.280 "rdma_cm_event_timeout_ms": 0, 00:18:10.280 "dhchap_digests": [ 00:18:10.280 "sha256", 00:18:10.280 "sha384", 00:18:10.280 "sha512" 00:18:10.280 ], 00:18:10.280 "dhchap_dhgroups": [ 00:18:10.280 "null", 00:18:10.280 "ffdhe2048", 00:18:10.280 "ffdhe3072", 00:18:10.280 "ffdhe4096", 00:18:10.280 "ffdhe6144", 00:18:10.280 "ffdhe8192" 00:18:10.280 ] 00:18:10.280 } 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "method": "bdev_nvme_attach_controller", 00:18:10.280 "params": { 00:18:10.280 "name": "TLSTEST", 00:18:10.280 "trtype": "TCP", 00:18:10.280 "adrfam": "IPv4", 00:18:10.280 "traddr": "10.0.0.2", 00:18:10.280 "trsvcid": "4420", 00:18:10.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.280 "prchk_reftag": false, 00:18:10.280 "prchk_guard": false, 00:18:10.280 "ctrlr_loss_timeout_sec": 0, 00:18:10.280 "reconnect_delay_sec": 0, 00:18:10.280 "fast_io_fail_timeout_sec": 0, 00:18:10.280 "psk": "/tmp/tmp.3vuAI12gxd", 00:18:10.280 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:10.280 "hdgst": false, 00:18:10.280 "ddgst": false 00:18:10.280 } 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "method": "bdev_nvme_set_hotplug", 00:18:10.280 "params": { 00:18:10.280 "period_us": 100000, 00:18:10.280 "enable": false 00:18:10.280 } 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "method": "bdev_wait_for_examine" 00:18:10.280 } 00:18:10.280 ] 00:18:10.280 }, 00:18:10.280 { 00:18:10.280 "subsystem": "nbd", 00:18:10.280 "config": [] 00:18:10.280 } 00:18:10.280 ] 00:18:10.280 }' 00:18:10.280 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 939706 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 939706 ']' 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 939706 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.281 
14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 939706 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 939706' 00:18:10.281 killing process with pid 939706 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 939706 00:18:10.281 Received shutdown signal, test time was about 10.000000 seconds 00:18:10.281 00:18:10.281 Latency(us) 00:18:10.281 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.281 =================================================================================================================== 00:18:10.281 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:10.281 [2024-07-25 14:19:39.902287] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:10.281 14:19:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 939706 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 939424 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 939424 ']' 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 939424 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 939424 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 939424' 00:18:10.540 killing process with pid 939424 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 939424 00:18:10.540 [2024-07-25 14:19:40.187585] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:10.540 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 939424 00:18:10.830 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:10.830 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:10.830 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:10.830 "subsystems": [ 00:18:10.830 { 00:18:10.830 "subsystem": "keyring", 00:18:10.830 "config": [] 00:18:10.830 }, 00:18:10.830 { 00:18:10.830 "subsystem": "iobuf", 00:18:10.830 "config": [ 00:18:10.830 { 00:18:10.830 "method": "iobuf_set_options", 00:18:10.830 "params": { 
00:18:10.830 "small_pool_count": 8192, 00:18:10.830 "large_pool_count": 1024, 00:18:10.830 "small_bufsize": 8192, 00:18:10.830 "large_bufsize": 135168 00:18:10.830 } 00:18:10.830 } 00:18:10.830 ] 00:18:10.830 }, 00:18:10.830 { 00:18:10.830 "subsystem": "sock", 00:18:10.830 "config": [ 00:18:10.830 { 00:18:10.830 "method": "sock_set_default_impl", 00:18:10.830 "params": { 00:18:10.830 "impl_name": "posix" 00:18:10.830 } 00:18:10.830 }, 00:18:10.830 { 00:18:10.830 "method": "sock_impl_set_options", 00:18:10.830 "params": { 00:18:10.830 "impl_name": "ssl", 00:18:10.830 "recv_buf_size": 4096, 00:18:10.830 "send_buf_size": 4096, 00:18:10.830 "enable_recv_pipe": true, 00:18:10.830 "enable_quickack": false, 00:18:10.830 "enable_placement_id": 0, 00:18:10.830 "enable_zerocopy_send_server": true, 00:18:10.830 "enable_zerocopy_send_client": false, 00:18:10.830 "zerocopy_threshold": 0, 00:18:10.830 "tls_version": 0, 00:18:10.830 "enable_ktls": false 00:18:10.830 } 00:18:10.830 }, 00:18:10.830 { 00:18:10.830 "method": "sock_impl_set_options", 00:18:10.830 "params": { 00:18:10.830 "impl_name": "posix", 00:18:10.830 "recv_buf_size": 2097152, 00:18:10.830 "send_buf_size": 2097152, 00:18:10.830 "enable_recv_pipe": true, 00:18:10.830 "enable_quickack": false, 00:18:10.830 "enable_placement_id": 0, 00:18:10.830 "enable_zerocopy_send_server": true, 00:18:10.830 "enable_zerocopy_send_client": false, 00:18:10.831 "zerocopy_threshold": 0, 00:18:10.831 "tls_version": 0, 00:18:10.831 "enable_ktls": false 00:18:10.831 } 00:18:10.831 } 00:18:10.831 ] 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "subsystem": "vmd", 00:18:10.831 "config": [] 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "subsystem": "accel", 00:18:10.831 "config": [ 00:18:10.831 { 00:18:10.831 "method": "accel_set_options", 00:18:10.831 "params": { 00:18:10.831 "small_cache_size": 128, 00:18:10.831 "large_cache_size": 16, 00:18:10.831 "task_count": 2048, 00:18:10.831 "sequence_count": 2048, 00:18:10.831 "buf_count": 2048 00:18:10.831 } 00:18:10.831 } 00:18:10.831 ] 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "subsystem": "bdev", 00:18:10.831 "config": [ 00:18:10.831 { 00:18:10.831 "method": "bdev_set_options", 00:18:10.831 "params": { 00:18:10.831 "bdev_io_pool_size": 65535, 00:18:10.831 "bdev_io_cache_size": 256, 00:18:10.831 "bdev_auto_examine": true, 00:18:10.831 "iobuf_small_cache_size": 128, 00:18:10.831 "iobuf_large_cache_size": 16 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "bdev_raid_set_options", 00:18:10.831 "params": { 00:18:10.831 "process_window_size_kb": 1024, 00:18:10.831 "process_max_bandwidth_mb_sec": 0 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "bdev_iscsi_set_options", 00:18:10.831 "params": { 00:18:10.831 "timeout_sec": 30 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "bdev_nvme_set_options", 00:18:10.831 "params": { 00:18:10.831 "action_on_timeout": "none", 00:18:10.831 "timeout_us": 0, 00:18:10.831 "timeout_admin_us": 0, 00:18:10.831 "keep_alive_timeout_ms": 10000, 00:18:10.831 "arbitration_burst": 0, 00:18:10.831 "low_priority_weight": 0, 00:18:10.831 "medium_priority_weight": 0, 00:18:10.831 "high_priority_weight": 0, 00:18:10.831 "nvme_adminq_poll_period_us": 10000, 00:18:10.831 "nvme_ioq_poll_period_us": 0, 00:18:10.831 "io_queue_requests": 0, 00:18:10.831 "delay_cmd_submit": true, 00:18:10.831 "transport_retry_count": 4, 00:18:10.831 "bdev_retry_count": 3, 00:18:10.831 "transport_ack_timeout": 0, 00:18:10.831 "ctrlr_loss_timeout_sec": 0, 00:18:10.831 
"reconnect_delay_sec": 0, 00:18:10.831 "fast_io_fail_timeout_sec": 0, 00:18:10.831 "disable_auto_failback": false, 00:18:10.831 "generate_uuids": false, 00:18:10.831 "transport_tos": 0, 00:18:10.831 "nvme_error_stat": false, 00:18:10.831 "rdma_srq_size": 0, 00:18:10.831 "io_path_stat": false, 00:18:10.831 "allow_accel_sequence": false, 00:18:10.831 "rdma_max_cq_size": 0, 00:18:10.831 "rdma_cm_event_timeout_ms": 0, 00:18:10.831 "dhchap_digests": [ 00:18:10.831 "sha256", 00:18:10.831 "sha384", 00:18:10.831 "sha512" 00:18:10.831 ], 00:18:10.831 "dhchap_dhgroups": [ 00:18:10.831 "null", 00:18:10.831 "ffdhe2048", 00:18:10.831 "ffdhe3072", 00:18:10.831 "ffdhe4096", 00:18:10.831 "ffdhe6144", 00:18:10.831 "ffdhe8192" 00:18:10.831 ] 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "bdev_nvme_set_hotplug", 00:18:10.831 "params": { 00:18:10.831 "period_us": 100000, 00:18:10.831 "enable": false 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "bdev_malloc_create", 00:18:10.831 "params": { 00:18:10.831 "name": "malloc0", 00:18:10.831 "num_blocks": 8192, 00:18:10.831 "block_size": 4096, 00:18:10.831 "physical_block_size": 4096, 00:18:10.831 "uuid": "9b5182e8-ac58-43ec-b9cd-15f4c6344d66", 00:18:10.831 "optimal_io_boundary": 0, 00:18:10.831 "md_size": 0, 00:18:10.831 "dif_type": 0, 00:18:10.831 "dif_is_head_of_md": false, 00:18:10.831 "dif_pi_format": 0 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "bdev_wait_for_examine" 00:18:10.831 } 00:18:10.831 ] 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "subsystem": "nbd", 00:18:10.831 "config": [] 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "subsystem": "scheduler", 00:18:10.831 "config": [ 00:18:10.831 { 00:18:10.831 "method": "framework_set_scheduler", 00:18:10.831 "params": { 00:18:10.831 "name": "static" 00:18:10.831 } 00:18:10.831 } 00:18:10.831 ] 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "subsystem": "nvmf", 00:18:10.831 "config": [ 00:18:10.831 { 00:18:10.831 "method": "nvmf_set_config", 00:18:10.831 "params": { 00:18:10.831 "discovery_filter": "match_any", 00:18:10.831 "admin_cmd_passthru": { 00:18:10.831 "identify_ctrlr": false 00:18:10.831 } 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "nvmf_set_max_subsystems", 00:18:10.831 "params": { 00:18:10.831 "max_subsystems": 1024 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "nvmf_set_crdt", 00:18:10.831 "params": { 00:18:10.831 "crdt1": 0, 00:18:10.831 "crdt2": 0, 00:18:10.831 "crdt3": 0 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "nvmf_create_transport", 00:18:10.831 "params": { 00:18:10.831 "trtype": "TCP", 00:18:10.831 "max_queue_depth": 128, 00:18:10.831 "max_io_qpairs_per_ctrlr": 127, 00:18:10.831 "in_capsule_data_size": 4096, 00:18:10.831 "max_io_size": 131072, 00:18:10.831 "io_unit_size": 131072, 00:18:10.831 "max_aq_depth": 128, 00:18:10.831 "num_shared_buffers": 511, 00:18:10.831 "buf_cache_size": 4294967295, 00:18:10.831 "dif_insert_or_strip": false, 00:18:10.831 "zcopy": false, 00:18:10.831 "c2h_success": false, 00:18:10.831 "sock_priority": 0, 00:18:10.831 "abort_timeout_sec": 1, 00:18:10.831 "ack_timeout": 0, 00:18:10.831 "data_wr_pool_size": 0 00:18:10.831 } 00:18:10.831 }, 00:18:10.831 { 00:18:10.831 "method": "nvmf_create_subsystem", 00:18:10.831 "params": { 00:18:10.831 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.831 "allow_any_host": false, 00:18:10.832 "serial_number": "SPDK00000000000001", 00:18:10.832 "model_number": "SPDK bdev Controller", 00:18:10.832 
"max_namespaces": 10, 00:18:10.832 "min_cntlid": 1, 00:18:10.832 "max_cntlid": 65519, 00:18:10.832 "ana_reporting": false 00:18:10.832 } 00:18:10.832 }, 00:18:10.832 { 00:18:10.832 "method": "nvmf_subsystem_add_host", 00:18:10.832 "params": { 00:18:10.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.832 "host": "nqn.2016-06.io.spdk:host1", 00:18:10.832 "psk": "/tmp/tmp.3vuAI12gxd" 00:18:10.832 } 00:18:10.832 }, 00:18:10.832 { 00:18:10.832 "method": "nvmf_subsystem_add_ns", 00:18:10.832 "params": { 00:18:10.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.832 "namespace": { 00:18:10.832 "nsid": 1, 00:18:10.832 "bdev_name": "malloc0", 00:18:10.832 "nguid": "9B5182E8AC5843ECB9CD15F4C6344D66", 00:18:10.832 "uuid": "9b5182e8-ac58-43ec-b9cd-15f4c6344d66", 00:18:10.832 "no_auto_visible": false 00:18:10.832 } 00:18:10.832 } 00:18:10.832 }, 00:18:10.832 { 00:18:10.832 "method": "nvmf_subsystem_add_listener", 00:18:10.832 "params": { 00:18:10.832 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:10.832 "listen_address": { 00:18:10.832 "trtype": "TCP", 00:18:10.832 "adrfam": "IPv4", 00:18:10.832 "traddr": "10.0.0.2", 00:18:10.832 "trsvcid": "4420" 00:18:10.832 }, 00:18:10.832 "secure_channel": true 00:18:10.832 } 00:18:10.832 } 00:18:10.832 ] 00:18:10.832 } 00:18:10.832 ] 00:18:10.832 }' 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=939979 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 939979 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 939979 ']' 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.832 14:19:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.091 [2024-07-25 14:19:40.481841] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:11.092 [2024-07-25 14:19:40.481927] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.092 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.092 [2024-07-25 14:19:40.542835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.092 [2024-07-25 14:19:40.647509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:11.092 [2024-07-25 14:19:40.647575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.092 [2024-07-25 14:19:40.647589] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.092 [2024-07-25 14:19:40.647600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.092 [2024-07-25 14:19:40.647609] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.092 [2024-07-25 14:19:40.647698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.353 [2024-07-25 14:19:40.875770] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.353 [2024-07-25 14:19:40.900396] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:11.353 [2024-07-25 14:19:40.916462] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:11.353 [2024-07-25 14:19:40.916716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=940129 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 940129 /var/tmp/bdevperf.sock 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 940129 ']' 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.918 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:11.918 "subsystems": [ 00:18:11.918 { 00:18:11.918 "subsystem": "keyring", 00:18:11.918 "config": [] 00:18:11.918 }, 00:18:11.918 { 00:18:11.918 "subsystem": "iobuf", 00:18:11.918 "config": [ 00:18:11.918 { 00:18:11.918 "method": "iobuf_set_options", 00:18:11.918 "params": { 00:18:11.918 "small_pool_count": 8192, 00:18:11.918 "large_pool_count": 1024, 00:18:11.918 "small_bufsize": 8192, 00:18:11.918 "large_bufsize": 135168 00:18:11.918 } 00:18:11.918 } 00:18:11.918 ] 00:18:11.918 }, 00:18:11.918 { 00:18:11.918 "subsystem": "sock", 00:18:11.918 "config": [ 00:18:11.918 { 00:18:11.918 "method": "sock_set_default_impl", 00:18:11.918 "params": { 00:18:11.918 "impl_name": "posix" 00:18:11.918 } 00:18:11.918 }, 00:18:11.918 
{ 00:18:11.918 "method": "sock_impl_set_options", 00:18:11.918 "params": { 00:18:11.918 "impl_name": "ssl", 00:18:11.918 "recv_buf_size": 4096, 00:18:11.918 "send_buf_size": 4096, 00:18:11.918 "enable_recv_pipe": true, 00:18:11.918 "enable_quickack": false, 00:18:11.918 "enable_placement_id": 0, 00:18:11.918 "enable_zerocopy_send_server": true, 00:18:11.918 "enable_zerocopy_send_client": false, 00:18:11.918 "zerocopy_threshold": 0, 00:18:11.918 "tls_version": 0, 00:18:11.918 "enable_ktls": false 00:18:11.918 } 00:18:11.918 }, 00:18:11.918 { 00:18:11.918 "method": "sock_impl_set_options", 00:18:11.918 "params": { 00:18:11.918 "impl_name": "posix", 00:18:11.918 "recv_buf_size": 2097152, 00:18:11.918 "send_buf_size": 2097152, 00:18:11.918 "enable_recv_pipe": true, 00:18:11.918 "enable_quickack": false, 00:18:11.918 "enable_placement_id": 0, 00:18:11.918 "enable_zerocopy_send_server": true, 00:18:11.918 "enable_zerocopy_send_client": false, 00:18:11.918 "zerocopy_threshold": 0, 00:18:11.918 "tls_version": 0, 00:18:11.918 "enable_ktls": false 00:18:11.918 } 00:18:11.918 } 00:18:11.918 ] 00:18:11.918 }, 00:18:11.918 { 00:18:11.918 "subsystem": "vmd", 00:18:11.918 "config": [] 00:18:11.918 }, 00:18:11.918 { 00:18:11.918 "subsystem": "accel", 00:18:11.918 "config": [ 00:18:11.918 { 00:18:11.918 "method": "accel_set_options", 00:18:11.918 "params": { 00:18:11.918 "small_cache_size": 128, 00:18:11.918 "large_cache_size": 16, 00:18:11.918 "task_count": 2048, 00:18:11.918 "sequence_count": 2048, 00:18:11.918 "buf_count": 2048 00:18:11.918 } 00:18:11.918 } 00:18:11.918 ] 00:18:11.918 }, 00:18:11.918 { 00:18:11.918 "subsystem": "bdev", 00:18:11.918 "config": [ 00:18:11.918 { 00:18:11.918 "method": "bdev_set_options", 00:18:11.918 "params": { 00:18:11.918 "bdev_io_pool_size": 65535, 00:18:11.918 "bdev_io_cache_size": 256, 00:18:11.918 "bdev_auto_examine": true, 00:18:11.918 "iobuf_small_cache_size": 128, 00:18:11.918 "iobuf_large_cache_size": 16 00:18:11.918 } 00:18:11.918 }, 00:18:11.918 { 00:18:11.919 "method": "bdev_raid_set_options", 00:18:11.919 "params": { 00:18:11.919 "process_window_size_kb": 1024, 00:18:11.919 "process_max_bandwidth_mb_sec": 0 00:18:11.919 } 00:18:11.919 }, 00:18:11.919 { 00:18:11.919 "method": "bdev_iscsi_set_options", 00:18:11.919 "params": { 00:18:11.919 "timeout_sec": 30 00:18:11.919 } 00:18:11.919 }, 00:18:11.919 { 00:18:11.919 "method": "bdev_nvme_set_options", 00:18:11.919 "params": { 00:18:11.919 "action_on_timeout": "none", 00:18:11.919 "timeout_us": 0, 00:18:11.919 "timeout_admin_us": 0, 00:18:11.919 "keep_alive_timeout_ms": 10000, 00:18:11.919 "arbitration_burst": 0, 00:18:11.919 "low_priority_weight": 0, 00:18:11.919 "medium_priority_weight": 0, 00:18:11.919 "high_priority_weight": 0, 00:18:11.919 "nvme_adminq_poll_period_us": 10000, 00:18:11.919 "nvme_ioq_poll_period_us": 0, 00:18:11.919 "io_queue_requests": 512, 00:18:11.919 "delay_cmd_submit": true, 00:18:11.919 "transport_retry_count": 4, 00:18:11.919 "bdev_retry_count": 3, 00:18:11.919 "transport_ack_timeout": 0, 00:18:11.919 "ctrlr_loss_timeout_sec": 0, 00:18:11.919 "reconnect_delay_sec": 0, 00:18:11.919 "fast_io_fail_timeout_sec": 0, 00:18:11.919 "disable_auto_failback": false, 00:18:11.919 "generate_uuids": false, 00:18:11.919 "transport_tos": 0, 00:18:11.919 "nvme_error_stat": false, 00:18:11.919 "rdma_srq_size": 0, 00:18:11.919 "io_path_stat": false, 00:18:11.919 "allow_accel_sequence": false, 00:18:11.919 "rdma_max_cq_size": 0, 00:18:11.919 "rdma_cm_event_timeout_ms": 0, 00:18:11.919 "dhchap_digests": [ 
00:18:11.919 "sha256", 00:18:11.919 "sha384", 00:18:11.919 "sha512" 00:18:11.919 ], 00:18:11.919 "dhchap_dhgroups": [ 00:18:11.919 "null", 00:18:11.919 "ffdhe2048", 00:18:11.919 "ffdhe3072", 00:18:11.919 "ffdhe4096", 00:18:11.919 "ffdhe6144", 00:18:11.919 "ffdhe8192" 00:18:11.919 ] 00:18:11.919 } 00:18:11.919 }, 00:18:11.919 { 00:18:11.919 "method": "bdev_nvme_attach_controller", 00:18:11.919 "params": { 00:18:11.919 "name": "TLSTEST", 00:18:11.919 "trtype": "TCP", 00:18:11.919 "adrfam": "IPv4", 00:18:11.919 "traddr": "10.0.0.2", 00:18:11.919 "trsvcid": "4420", 00:18:11.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.919 "prchk_reftag": false, 00:18:11.919 "prchk_guard": false, 00:18:11.919 "ctrlr_loss_timeout_sec": 0, 00:18:11.919 "reconnect_delay_sec": 0, 00:18:11.919 "fast_io_fail_timeout_sec": 0, 00:18:11.919 "psk": "/tmp/tmp.3vuAI12gxd", 00:18:11.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.919 "hdgst": false, 00:18:11.919 "ddgst": false 00:18:11.919 } 00:18:11.919 }, 00:18:11.919 { 00:18:11.919 "method": "bdev_nvme_set_hotplug", 00:18:11.919 "params": { 00:18:11.919 "period_us": 100000, 00:18:11.919 "enable": false 00:18:11.919 } 00:18:11.919 }, 00:18:11.919 { 00:18:11.919 "method": "bdev_wait_for_examine" 00:18:11.919 } 00:18:11.919 ] 00:18:11.919 }, 00:18:11.919 { 00:18:11.919 "subsystem": "nbd", 00:18:11.919 "config": [] 00:18:11.919 } 00:18:11.919 ] 00:18:11.919 }' 00:18:11.919 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.919 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.919 14:19:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.919 [2024-07-25 14:19:41.554025] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:11.919 [2024-07-25 14:19:41.554128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940129 ] 00:18:12.176 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.176 [2024-07-25 14:19:41.612285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.176 [2024-07-25 14:19:41.717505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.433 [2024-07-25 14:19:41.879703] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.433 [2024-07-25 14:19:41.879837] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:13.000 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:13.000 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:13.000 14:19:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:13.000 Running I/O for 10 seconds... 
00:18:25.209 00:18:25.209 Latency(us) 00:18:25.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.209 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:25.209 Verification LBA range: start 0x0 length 0x2000 00:18:25.209 TLSTESTn1 : 10.03 3481.24 13.60 0.00 0.00 36692.62 5825.42 31457.28 00:18:25.209 =================================================================================================================== 00:18:25.209 Total : 3481.24 13.60 0.00 0.00 36692.62 5825.42 31457.28 00:18:25.209 0 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 940129 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 940129 ']' 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 940129 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 940129 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 940129' 00:18:25.209 killing process with pid 940129 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 940129 00:18:25.209 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.209 00:18:25.209 Latency(us) 00:18:25.209 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.209 =================================================================================================================== 00:18:25.209 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.209 [2024-07-25 14:19:52.715820] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 940129 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 939979 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 939979 ']' 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 939979 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.209 14:19:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 939979 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 939979' 00:18:25.209 killing process with pid 939979 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 939979 00:18:25.209 [2024-07-25 14:19:53.014494] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 939979 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=941462 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 941462 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 941462 ']' 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.209 [2024-07-25 14:19:53.344148] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:25.209 [2024-07-25 14:19:53.344244] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.209 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.209 [2024-07-25 14:19:53.407772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.209 [2024-07-25 14:19:53.513181] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.209 [2024-07-25 14:19:53.513235] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.209 [2024-07-25 14:19:53.513259] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.209 [2024-07-25 14:19:53.513271] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.209 [2024-07-25 14:19:53.513281] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:25.209 [2024-07-25 14:19:53.513307] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.3vuAI12gxd 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.3vuAI12gxd 00:18:25.209 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:25.209 [2024-07-25 14:19:53.867856] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.210 14:19:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:25.210 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:25.210 [2024-07-25 14:19:54.337078] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:25.210 [2024-07-25 14:19:54.337298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.210 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:25.210 malloc0 00:18:25.210 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:25.210 14:19:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.3vuAI12gxd 00:18:25.468 [2024-07-25 14:19:55.065432] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=941746 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 941746 /var/tmp/bdevperf.sock 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' 
-z 941746 ']' 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:25.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:25.468 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.781 [2024-07-25 14:19:55.126441] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:25.781 [2024-07-25 14:19:55.126512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid941746 ] 00:18:25.781 EAL: No free 2048 kB hugepages reported on node 1 00:18:25.781 [2024-07-25 14:19:55.198921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.781 [2024-07-25 14:19:55.332616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.039 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.039 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:26.039 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3vuAI12gxd 00:18:26.039 14:19:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:26.299 [2024-07-25 14:19:55.924563] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:26.558 nvme0n1 00:18:26.558 14:19:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:26.558 Running I/O for 1 seconds... 
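The second bdevperf pass above switches from the deprecated path-based PSK to the keyring flow: the key file is first registered with keyring_file_add_key and then referenced by name via --psk key0 in bdev_nvme_attach_controller. A sketch of that RPC sequence against the bdevperf RPC socket is shown here, with all commands and arguments copied from the logged tls.sh steps (the 1-second result table for the actual run follows below); only the comments are added.

# Keyring-based TLS attach used by the second bdevperf run (arguments as logged).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Register the PSK file as a named key on the bdevperf instance...
$RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3vuAI12gxd

# ...then attach the NVMe/TCP controller over TLS, referencing the key by name.
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

# Run I/O through the attached bdev for the test window.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests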
00:18:27.936 00:18:27.936 Latency(us) 00:18:27.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.936 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:27.936 Verification LBA range: start 0x0 length 0x2000 00:18:27.936 nvme0n1 : 1.02 3607.13 14.09 0.00 0.00 35138.84 6262.33 26796.94 00:18:27.936 =================================================================================================================== 00:18:27.936 Total : 3607.13 14.09 0.00 0.00 35138.84 6262.33 26796.94 00:18:27.936 0 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 941746 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 941746 ']' 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 941746 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941746 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:27.936 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941746' 00:18:27.936 killing process with pid 941746 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 941746 00:18:27.937 Received shutdown signal, test time was about 1.000000 seconds 00:18:27.937 00:18:27.937 Latency(us) 00:18:27.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:27.937 =================================================================================================================== 00:18:27.937 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 941746 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 941462 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 941462 ']' 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 941462 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941462 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941462' 00:18:27.937 killing process with pid 941462 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 941462 00:18:27.937 [2024-07-25 14:19:57.490713] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:27.937 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 941462 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=942027 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 942027 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 942027 ']' 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.196 14:19:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.196 [2024-07-25 14:19:57.818897] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:28.196 [2024-07-25 14:19:57.818992] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.454 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.454 [2024-07-25 14:19:57.882325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.454 [2024-07-25 14:19:57.978581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.454 [2024-07-25 14:19:57.978655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.454 [2024-07-25 14:19:57.978675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.454 [2024-07-25 14:19:57.978686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.454 [2024-07-25 14:19:57.978696] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.454 [2024-07-25 14:19:57.978723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.454 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.454 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.454 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.454 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.454 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.713 [2024-07-25 14:19:58.117786] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.713 malloc0 00:18:28.713 [2024-07-25 14:19:58.149457] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:28.713 [2024-07-25 14:19:58.164273] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=942058 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 942058 /var/tmp/bdevperf.sock 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 942058 ']' 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:28.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.713 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.713 [2024-07-25 14:19:58.228738] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:18:28.713 [2024-07-25 14:19:58.228800] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942058 ] 00:18:28.713 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.713 [2024-07-25 14:19:58.287455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.971 [2024-07-25 14:19:58.394748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.971 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.971 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.971 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.3vuAI12gxd 00:18:29.229 14:19:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:29.487 [2024-07-25 14:19:58.972594] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.487 nvme0n1 00:18:29.487 14:19:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.745 Running I/O for 1 seconds... 00:18:30.682 00:18:30.682 Latency(us) 00:18:30.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.682 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.682 Verification LBA range: start 0x0 length 0x2000 00:18:30.682 nvme0n1 : 1.02 3454.89 13.50 0.00 0.00 36717.30 6699.24 38059.43 00:18:30.682 =================================================================================================================== 00:18:30.682 Total : 3454.89 13.50 0.00 0.00 36717.30 6699.24 38059.43 00:18:30.682 0 00:18:30.682 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:30.682 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.682 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.682 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.682 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:30.682 "subsystems": [ 00:18:30.682 { 00:18:30.682 "subsystem": "keyring", 00:18:30.682 "config": [ 00:18:30.682 { 00:18:30.682 "method": "keyring_file_add_key", 00:18:30.682 "params": { 00:18:30.682 "name": "key0", 00:18:30.682 "path": "/tmp/tmp.3vuAI12gxd" 00:18:30.682 } 00:18:30.682 } 00:18:30.682 ] 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "subsystem": "iobuf", 00:18:30.682 "config": [ 00:18:30.682 { 00:18:30.682 "method": "iobuf_set_options", 00:18:30.682 "params": { 00:18:30.682 "small_pool_count": 8192, 00:18:30.682 "large_pool_count": 1024, 00:18:30.682 "small_bufsize": 8192, 00:18:30.682 "large_bufsize": 135168 00:18:30.682 } 00:18:30.682 } 00:18:30.682 ] 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 
"subsystem": "sock", 00:18:30.682 "config": [ 00:18:30.682 { 00:18:30.682 "method": "sock_set_default_impl", 00:18:30.682 "params": { 00:18:30.682 "impl_name": "posix" 00:18:30.682 } 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "method": "sock_impl_set_options", 00:18:30.682 "params": { 00:18:30.682 "impl_name": "ssl", 00:18:30.682 "recv_buf_size": 4096, 00:18:30.682 "send_buf_size": 4096, 00:18:30.682 "enable_recv_pipe": true, 00:18:30.682 "enable_quickack": false, 00:18:30.682 "enable_placement_id": 0, 00:18:30.682 "enable_zerocopy_send_server": true, 00:18:30.682 "enable_zerocopy_send_client": false, 00:18:30.682 "zerocopy_threshold": 0, 00:18:30.682 "tls_version": 0, 00:18:30.682 "enable_ktls": false 00:18:30.682 } 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "method": "sock_impl_set_options", 00:18:30.682 "params": { 00:18:30.682 "impl_name": "posix", 00:18:30.682 "recv_buf_size": 2097152, 00:18:30.682 "send_buf_size": 2097152, 00:18:30.682 "enable_recv_pipe": true, 00:18:30.682 "enable_quickack": false, 00:18:30.682 "enable_placement_id": 0, 00:18:30.682 "enable_zerocopy_send_server": true, 00:18:30.682 "enable_zerocopy_send_client": false, 00:18:30.682 "zerocopy_threshold": 0, 00:18:30.682 "tls_version": 0, 00:18:30.682 "enable_ktls": false 00:18:30.682 } 00:18:30.682 } 00:18:30.682 ] 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "subsystem": "vmd", 00:18:30.682 "config": [] 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "subsystem": "accel", 00:18:30.682 "config": [ 00:18:30.682 { 00:18:30.682 "method": "accel_set_options", 00:18:30.682 "params": { 00:18:30.682 "small_cache_size": 128, 00:18:30.682 "large_cache_size": 16, 00:18:30.682 "task_count": 2048, 00:18:30.682 "sequence_count": 2048, 00:18:30.682 "buf_count": 2048 00:18:30.682 } 00:18:30.682 } 00:18:30.682 ] 00:18:30.682 }, 00:18:30.682 { 00:18:30.682 "subsystem": "bdev", 00:18:30.682 "config": [ 00:18:30.682 { 00:18:30.682 "method": "bdev_set_options", 00:18:30.682 "params": { 00:18:30.682 "bdev_io_pool_size": 65535, 00:18:30.682 "bdev_io_cache_size": 256, 00:18:30.682 "bdev_auto_examine": true, 00:18:30.682 "iobuf_small_cache_size": 128, 00:18:30.682 "iobuf_large_cache_size": 16 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "bdev_raid_set_options", 00:18:30.683 "params": { 00:18:30.683 "process_window_size_kb": 1024, 00:18:30.683 "process_max_bandwidth_mb_sec": 0 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "bdev_iscsi_set_options", 00:18:30.683 "params": { 00:18:30.683 "timeout_sec": 30 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "bdev_nvme_set_options", 00:18:30.683 "params": { 00:18:30.683 "action_on_timeout": "none", 00:18:30.683 "timeout_us": 0, 00:18:30.683 "timeout_admin_us": 0, 00:18:30.683 "keep_alive_timeout_ms": 10000, 00:18:30.683 "arbitration_burst": 0, 00:18:30.683 "low_priority_weight": 0, 00:18:30.683 "medium_priority_weight": 0, 00:18:30.683 "high_priority_weight": 0, 00:18:30.683 "nvme_adminq_poll_period_us": 10000, 00:18:30.683 "nvme_ioq_poll_period_us": 0, 00:18:30.683 "io_queue_requests": 0, 00:18:30.683 "delay_cmd_submit": true, 00:18:30.683 "transport_retry_count": 4, 00:18:30.683 "bdev_retry_count": 3, 00:18:30.683 "transport_ack_timeout": 0, 00:18:30.683 "ctrlr_loss_timeout_sec": 0, 00:18:30.683 "reconnect_delay_sec": 0, 00:18:30.683 "fast_io_fail_timeout_sec": 0, 00:18:30.683 "disable_auto_failback": false, 00:18:30.683 "generate_uuids": false, 00:18:30.683 "transport_tos": 0, 00:18:30.683 "nvme_error_stat": false, 00:18:30.683 
"rdma_srq_size": 0, 00:18:30.683 "io_path_stat": false, 00:18:30.683 "allow_accel_sequence": false, 00:18:30.683 "rdma_max_cq_size": 0, 00:18:30.683 "rdma_cm_event_timeout_ms": 0, 00:18:30.683 "dhchap_digests": [ 00:18:30.683 "sha256", 00:18:30.683 "sha384", 00:18:30.683 "sha512" 00:18:30.683 ], 00:18:30.683 "dhchap_dhgroups": [ 00:18:30.683 "null", 00:18:30.683 "ffdhe2048", 00:18:30.683 "ffdhe3072", 00:18:30.683 "ffdhe4096", 00:18:30.683 "ffdhe6144", 00:18:30.683 "ffdhe8192" 00:18:30.683 ] 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "bdev_nvme_set_hotplug", 00:18:30.683 "params": { 00:18:30.683 "period_us": 100000, 00:18:30.683 "enable": false 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "bdev_malloc_create", 00:18:30.683 "params": { 00:18:30.683 "name": "malloc0", 00:18:30.683 "num_blocks": 8192, 00:18:30.683 "block_size": 4096, 00:18:30.683 "physical_block_size": 4096, 00:18:30.683 "uuid": "06089d0c-76a8-47d4-9bcf-2f2215eb19cf", 00:18:30.683 "optimal_io_boundary": 0, 00:18:30.683 "md_size": 0, 00:18:30.683 "dif_type": 0, 00:18:30.683 "dif_is_head_of_md": false, 00:18:30.683 "dif_pi_format": 0 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "bdev_wait_for_examine" 00:18:30.683 } 00:18:30.683 ] 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "subsystem": "nbd", 00:18:30.683 "config": [] 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "subsystem": "scheduler", 00:18:30.683 "config": [ 00:18:30.683 { 00:18:30.683 "method": "framework_set_scheduler", 00:18:30.683 "params": { 00:18:30.683 "name": "static" 00:18:30.683 } 00:18:30.683 } 00:18:30.683 ] 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "subsystem": "nvmf", 00:18:30.683 "config": [ 00:18:30.683 { 00:18:30.683 "method": "nvmf_set_config", 00:18:30.683 "params": { 00:18:30.683 "discovery_filter": "match_any", 00:18:30.683 "admin_cmd_passthru": { 00:18:30.683 "identify_ctrlr": false 00:18:30.683 } 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "nvmf_set_max_subsystems", 00:18:30.683 "params": { 00:18:30.683 "max_subsystems": 1024 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "nvmf_set_crdt", 00:18:30.683 "params": { 00:18:30.683 "crdt1": 0, 00:18:30.683 "crdt2": 0, 00:18:30.683 "crdt3": 0 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "nvmf_create_transport", 00:18:30.683 "params": { 00:18:30.683 "trtype": "TCP", 00:18:30.683 "max_queue_depth": 128, 00:18:30.683 "max_io_qpairs_per_ctrlr": 127, 00:18:30.683 "in_capsule_data_size": 4096, 00:18:30.683 "max_io_size": 131072, 00:18:30.683 "io_unit_size": 131072, 00:18:30.683 "max_aq_depth": 128, 00:18:30.683 "num_shared_buffers": 511, 00:18:30.683 "buf_cache_size": 4294967295, 00:18:30.683 "dif_insert_or_strip": false, 00:18:30.683 "zcopy": false, 00:18:30.683 "c2h_success": false, 00:18:30.683 "sock_priority": 0, 00:18:30.683 "abort_timeout_sec": 1, 00:18:30.683 "ack_timeout": 0, 00:18:30.683 "data_wr_pool_size": 0 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "nvmf_create_subsystem", 00:18:30.683 "params": { 00:18:30.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.683 "allow_any_host": false, 00:18:30.683 "serial_number": "00000000000000000000", 00:18:30.683 "model_number": "SPDK bdev Controller", 00:18:30.683 "max_namespaces": 32, 00:18:30.683 "min_cntlid": 1, 00:18:30.683 "max_cntlid": 65519, 00:18:30.683 "ana_reporting": false 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "nvmf_subsystem_add_host", 00:18:30.683 
"params": { 00:18:30.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.683 "host": "nqn.2016-06.io.spdk:host1", 00:18:30.683 "psk": "key0" 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "nvmf_subsystem_add_ns", 00:18:30.683 "params": { 00:18:30.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.683 "namespace": { 00:18:30.683 "nsid": 1, 00:18:30.683 "bdev_name": "malloc0", 00:18:30.683 "nguid": "06089D0C76A847D49BCF2F2215EB19CF", 00:18:30.683 "uuid": "06089d0c-76a8-47d4-9bcf-2f2215eb19cf", 00:18:30.683 "no_auto_visible": false 00:18:30.683 } 00:18:30.683 } 00:18:30.683 }, 00:18:30.683 { 00:18:30.683 "method": "nvmf_subsystem_add_listener", 00:18:30.683 "params": { 00:18:30.683 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.683 "listen_address": { 00:18:30.683 "trtype": "TCP", 00:18:30.683 "adrfam": "IPv4", 00:18:30.683 "traddr": "10.0.0.2", 00:18:30.683 "trsvcid": "4420" 00:18:30.683 }, 00:18:30.683 "secure_channel": false, 00:18:30.683 "sock_impl": "ssl" 00:18:30.683 } 00:18:30.683 } 00:18:30.683 ] 00:18:30.683 } 00:18:30.683 ] 00:18:30.683 }' 00:18:30.683 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:31.251 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:31.251 "subsystems": [ 00:18:31.251 { 00:18:31.251 "subsystem": "keyring", 00:18:31.251 "config": [ 00:18:31.251 { 00:18:31.251 "method": "keyring_file_add_key", 00:18:31.251 "params": { 00:18:31.251 "name": "key0", 00:18:31.251 "path": "/tmp/tmp.3vuAI12gxd" 00:18:31.251 } 00:18:31.251 } 00:18:31.251 ] 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "subsystem": "iobuf", 00:18:31.251 "config": [ 00:18:31.251 { 00:18:31.251 "method": "iobuf_set_options", 00:18:31.251 "params": { 00:18:31.251 "small_pool_count": 8192, 00:18:31.251 "large_pool_count": 1024, 00:18:31.251 "small_bufsize": 8192, 00:18:31.251 "large_bufsize": 135168 00:18:31.251 } 00:18:31.251 } 00:18:31.251 ] 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "subsystem": "sock", 00:18:31.251 "config": [ 00:18:31.251 { 00:18:31.251 "method": "sock_set_default_impl", 00:18:31.251 "params": { 00:18:31.251 "impl_name": "posix" 00:18:31.251 } 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "method": "sock_impl_set_options", 00:18:31.251 "params": { 00:18:31.251 "impl_name": "ssl", 00:18:31.251 "recv_buf_size": 4096, 00:18:31.251 "send_buf_size": 4096, 00:18:31.251 "enable_recv_pipe": true, 00:18:31.251 "enable_quickack": false, 00:18:31.251 "enable_placement_id": 0, 00:18:31.251 "enable_zerocopy_send_server": true, 00:18:31.251 "enable_zerocopy_send_client": false, 00:18:31.251 "zerocopy_threshold": 0, 00:18:31.251 "tls_version": 0, 00:18:31.251 "enable_ktls": false 00:18:31.251 } 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "method": "sock_impl_set_options", 00:18:31.251 "params": { 00:18:31.251 "impl_name": "posix", 00:18:31.251 "recv_buf_size": 2097152, 00:18:31.251 "send_buf_size": 2097152, 00:18:31.251 "enable_recv_pipe": true, 00:18:31.251 "enable_quickack": false, 00:18:31.251 "enable_placement_id": 0, 00:18:31.251 "enable_zerocopy_send_server": true, 00:18:31.251 "enable_zerocopy_send_client": false, 00:18:31.251 "zerocopy_threshold": 0, 00:18:31.251 "tls_version": 0, 00:18:31.251 "enable_ktls": false 00:18:31.251 } 00:18:31.251 } 00:18:31.251 ] 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "subsystem": "vmd", 00:18:31.251 "config": [] 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "subsystem": 
"accel", 00:18:31.251 "config": [ 00:18:31.251 { 00:18:31.251 "method": "accel_set_options", 00:18:31.251 "params": { 00:18:31.251 "small_cache_size": 128, 00:18:31.251 "large_cache_size": 16, 00:18:31.251 "task_count": 2048, 00:18:31.251 "sequence_count": 2048, 00:18:31.251 "buf_count": 2048 00:18:31.251 } 00:18:31.251 } 00:18:31.251 ] 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "subsystem": "bdev", 00:18:31.251 "config": [ 00:18:31.251 { 00:18:31.251 "method": "bdev_set_options", 00:18:31.251 "params": { 00:18:31.251 "bdev_io_pool_size": 65535, 00:18:31.251 "bdev_io_cache_size": 256, 00:18:31.251 "bdev_auto_examine": true, 00:18:31.251 "iobuf_small_cache_size": 128, 00:18:31.251 "iobuf_large_cache_size": 16 00:18:31.251 } 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "method": "bdev_raid_set_options", 00:18:31.251 "params": { 00:18:31.251 "process_window_size_kb": 1024, 00:18:31.251 "process_max_bandwidth_mb_sec": 0 00:18:31.251 } 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "method": "bdev_iscsi_set_options", 00:18:31.251 "params": { 00:18:31.251 "timeout_sec": 30 00:18:31.251 } 00:18:31.251 }, 00:18:31.251 { 00:18:31.251 "method": "bdev_nvme_set_options", 00:18:31.251 "params": { 00:18:31.251 "action_on_timeout": "none", 00:18:31.251 "timeout_us": 0, 00:18:31.251 "timeout_admin_us": 0, 00:18:31.251 "keep_alive_timeout_ms": 10000, 00:18:31.251 "arbitration_burst": 0, 00:18:31.251 "low_priority_weight": 0, 00:18:31.251 "medium_priority_weight": 0, 00:18:31.251 "high_priority_weight": 0, 00:18:31.251 "nvme_adminq_poll_period_us": 10000, 00:18:31.251 "nvme_ioq_poll_period_us": 0, 00:18:31.251 "io_queue_requests": 512, 00:18:31.251 "delay_cmd_submit": true, 00:18:31.251 "transport_retry_count": 4, 00:18:31.251 "bdev_retry_count": 3, 00:18:31.251 "transport_ack_timeout": 0, 00:18:31.251 "ctrlr_loss_timeout_sec": 0, 00:18:31.251 "reconnect_delay_sec": 0, 00:18:31.251 "fast_io_fail_timeout_sec": 0, 00:18:31.251 "disable_auto_failback": false, 00:18:31.251 "generate_uuids": false, 00:18:31.251 "transport_tos": 0, 00:18:31.252 "nvme_error_stat": false, 00:18:31.252 "rdma_srq_size": 0, 00:18:31.252 "io_path_stat": false, 00:18:31.252 "allow_accel_sequence": false, 00:18:31.252 "rdma_max_cq_size": 0, 00:18:31.252 "rdma_cm_event_timeout_ms": 0, 00:18:31.252 "dhchap_digests": [ 00:18:31.252 "sha256", 00:18:31.252 "sha384", 00:18:31.252 "sha512" 00:18:31.252 ], 00:18:31.252 "dhchap_dhgroups": [ 00:18:31.252 "null", 00:18:31.252 "ffdhe2048", 00:18:31.252 "ffdhe3072", 00:18:31.252 "ffdhe4096", 00:18:31.252 "ffdhe6144", 00:18:31.252 "ffdhe8192" 00:18:31.252 ] 00:18:31.252 } 00:18:31.252 }, 00:18:31.252 { 00:18:31.252 "method": "bdev_nvme_attach_controller", 00:18:31.252 "params": { 00:18:31.252 "name": "nvme0", 00:18:31.252 "trtype": "TCP", 00:18:31.252 "adrfam": "IPv4", 00:18:31.252 "traddr": "10.0.0.2", 00:18:31.252 "trsvcid": "4420", 00:18:31.252 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.252 "prchk_reftag": false, 00:18:31.252 "prchk_guard": false, 00:18:31.252 "ctrlr_loss_timeout_sec": 0, 00:18:31.252 "reconnect_delay_sec": 0, 00:18:31.252 "fast_io_fail_timeout_sec": 0, 00:18:31.252 "psk": "key0", 00:18:31.252 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.252 "hdgst": false, 00:18:31.252 "ddgst": false 00:18:31.252 } 00:18:31.252 }, 00:18:31.252 { 00:18:31.252 "method": "bdev_nvme_set_hotplug", 00:18:31.252 "params": { 00:18:31.252 "period_us": 100000, 00:18:31.252 "enable": false 00:18:31.252 } 00:18:31.252 }, 00:18:31.252 { 00:18:31.252 "method": "bdev_enable_histogram", 00:18:31.252 
"params": { 00:18:31.252 "name": "nvme0n1", 00:18:31.252 "enable": true 00:18:31.252 } 00:18:31.252 }, 00:18:31.252 { 00:18:31.252 "method": "bdev_wait_for_examine" 00:18:31.252 } 00:18:31.252 ] 00:18:31.252 }, 00:18:31.252 { 00:18:31.252 "subsystem": "nbd", 00:18:31.252 "config": [] 00:18:31.252 } 00:18:31.252 ] 00:18:31.252 }' 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 942058 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 942058 ']' 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 942058 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942058 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942058' 00:18:31.252 killing process with pid 942058 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 942058 00:18:31.252 Received shutdown signal, test time was about 1.000000 seconds 00:18:31.252 00:18:31.252 Latency(us) 00:18:31.252 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.252 =================================================================================================================== 00:18:31.252 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 942058 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 942027 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 942027 ']' 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 942027 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:31.252 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942027 00:18:31.511 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:31.511 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:31.511 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942027' 00:18:31.511 killing process with pid 942027 00:18:31.511 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 942027 00:18:31.511 14:20:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 942027 00:18:31.769 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:31.769 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:18:31.769 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:31.769 "subsystems": [ 00:18:31.769 { 00:18:31.769 "subsystem": "keyring", 00:18:31.769 "config": [ 00:18:31.769 { 00:18:31.769 "method": "keyring_file_add_key", 00:18:31.769 "params": { 00:18:31.769 "name": "key0", 00:18:31.769 "path": "/tmp/tmp.3vuAI12gxd" 00:18:31.769 } 00:18:31.769 } 00:18:31.769 ] 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "subsystem": "iobuf", 00:18:31.769 "config": [ 00:18:31.769 { 00:18:31.769 "method": "iobuf_set_options", 00:18:31.769 "params": { 00:18:31.769 "small_pool_count": 8192, 00:18:31.769 "large_pool_count": 1024, 00:18:31.769 "small_bufsize": 8192, 00:18:31.769 "large_bufsize": 135168 00:18:31.769 } 00:18:31.769 } 00:18:31.769 ] 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "subsystem": "sock", 00:18:31.769 "config": [ 00:18:31.769 { 00:18:31.769 "method": "sock_set_default_impl", 00:18:31.769 "params": { 00:18:31.769 "impl_name": "posix" 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "sock_impl_set_options", 00:18:31.769 "params": { 00:18:31.769 "impl_name": "ssl", 00:18:31.769 "recv_buf_size": 4096, 00:18:31.769 "send_buf_size": 4096, 00:18:31.769 "enable_recv_pipe": true, 00:18:31.769 "enable_quickack": false, 00:18:31.769 "enable_placement_id": 0, 00:18:31.769 "enable_zerocopy_send_server": true, 00:18:31.769 "enable_zerocopy_send_client": false, 00:18:31.769 "zerocopy_threshold": 0, 00:18:31.769 "tls_version": 0, 00:18:31.769 "enable_ktls": false 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "sock_impl_set_options", 00:18:31.769 "params": { 00:18:31.769 "impl_name": "posix", 00:18:31.769 "recv_buf_size": 2097152, 00:18:31.769 "send_buf_size": 2097152, 00:18:31.769 "enable_recv_pipe": true, 00:18:31.769 "enable_quickack": false, 00:18:31.769 "enable_placement_id": 0, 00:18:31.769 "enable_zerocopy_send_server": true, 00:18:31.769 "enable_zerocopy_send_client": false, 00:18:31.769 "zerocopy_threshold": 0, 00:18:31.769 "tls_version": 0, 00:18:31.769 "enable_ktls": false 00:18:31.769 } 00:18:31.769 } 00:18:31.769 ] 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "subsystem": "vmd", 00:18:31.769 "config": [] 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "subsystem": "accel", 00:18:31.769 "config": [ 00:18:31.769 { 00:18:31.769 "method": "accel_set_options", 00:18:31.769 "params": { 00:18:31.769 "small_cache_size": 128, 00:18:31.769 "large_cache_size": 16, 00:18:31.769 "task_count": 2048, 00:18:31.769 "sequence_count": 2048, 00:18:31.769 "buf_count": 2048 00:18:31.769 } 00:18:31.769 } 00:18:31.769 ] 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "subsystem": "bdev", 00:18:31.769 "config": [ 00:18:31.769 { 00:18:31.769 "method": "bdev_set_options", 00:18:31.769 "params": { 00:18:31.769 "bdev_io_pool_size": 65535, 00:18:31.769 "bdev_io_cache_size": 256, 00:18:31.769 "bdev_auto_examine": true, 00:18:31.769 "iobuf_small_cache_size": 128, 00:18:31.769 "iobuf_large_cache_size": 16 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "bdev_raid_set_options", 00:18:31.769 "params": { 00:18:31.769 "process_window_size_kb": 1024, 00:18:31.769 "process_max_bandwidth_mb_sec": 0 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "bdev_iscsi_set_options", 00:18:31.769 "params": { 00:18:31.769 "timeout_sec": 30 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "bdev_nvme_set_options", 00:18:31.769 "params": { 00:18:31.769 "action_on_timeout": "none", 00:18:31.769 
"timeout_us": 0, 00:18:31.769 "timeout_admin_us": 0, 00:18:31.769 "keep_alive_timeout_ms": 10000, 00:18:31.769 "arbitration_burst": 0, 00:18:31.769 "low_priority_weight": 0, 00:18:31.769 "medium_priority_weight": 0, 00:18:31.769 "high_priority_weight": 0, 00:18:31.769 "nvme_adminq_poll_period_us": 10000, 00:18:31.769 "nvme_ioq_poll_period_us": 0, 00:18:31.769 "io_queue_requests": 0, 00:18:31.769 "delay_cmd_submit": true, 00:18:31.769 "transport_retry_count": 4, 00:18:31.769 "bdev_retry_count": 3, 00:18:31.769 "transport_ack_timeout": 0, 00:18:31.769 "ctrlr_loss_timeout_sec": 0, 00:18:31.769 "reconnect_delay_sec": 0, 00:18:31.769 "fast_io_fail_timeout_sec": 0, 00:18:31.769 "disable_auto_failback": false, 00:18:31.769 "generate_uuids": false, 00:18:31.769 "transport_tos": 0, 00:18:31.769 "nvme_error_stat": false, 00:18:31.769 "rdma_srq_size": 0, 00:18:31.769 "io_path_stat": false, 00:18:31.769 "allow_accel_sequence": false, 00:18:31.769 "rdma_max_cq_size": 0, 00:18:31.769 "rdma_cm_event_timeout_ms": 0, 00:18:31.769 "dhchap_digests": [ 00:18:31.769 "sha256", 00:18:31.769 "sha384", 00:18:31.769 "sha512" 00:18:31.769 ], 00:18:31.769 "dhchap_dhgroups": [ 00:18:31.769 "null", 00:18:31.769 "ffdhe2048", 00:18:31.769 "ffdhe3072", 00:18:31.769 "ffdhe4096", 00:18:31.769 "ffdhe6144", 00:18:31.769 "ffdhe8192" 00:18:31.769 ] 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "bdev_nvme_set_hotplug", 00:18:31.769 "params": { 00:18:31.769 "period_us": 100000, 00:18:31.769 "enable": false 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "bdev_malloc_create", 00:18:31.769 "params": { 00:18:31.769 "name": "malloc0", 00:18:31.769 "num_blocks": 8192, 00:18:31.769 "block_size": 4096, 00:18:31.769 "physical_block_size": 4096, 00:18:31.769 "uuid": "06089d0c-76a8-47d4-9bcf-2f2215eb19cf", 00:18:31.769 "optimal_io_boundary": 0, 00:18:31.769 "md_size": 0, 00:18:31.769 "dif_type": 0, 00:18:31.769 "dif_is_head_of_md": false, 00:18:31.769 "dif_pi_format": 0 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "bdev_wait_for_examine" 00:18:31.769 } 00:18:31.769 ] 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "subsystem": "nbd", 00:18:31.769 "config": [] 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "subsystem": "scheduler", 00:18:31.769 "config": [ 00:18:31.769 { 00:18:31.769 "method": "framework_set_scheduler", 00:18:31.769 "params": { 00:18:31.769 "name": "static" 00:18:31.769 } 00:18:31.769 } 00:18:31.769 ] 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "subsystem": "nvmf", 00:18:31.769 "config": [ 00:18:31.769 { 00:18:31.769 "method": "nvmf_set_config", 00:18:31.769 "params": { 00:18:31.769 "discovery_filter": "match_any", 00:18:31.769 "admin_cmd_passthru": { 00:18:31.769 "identify_ctrlr": false 00:18:31.769 } 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "nvmf_set_max_subsystems", 00:18:31.769 "params": { 00:18:31.769 "max_subsystems": 1024 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "nvmf_set_crdt", 00:18:31.769 "params": { 00:18:31.769 "crdt1": 0, 00:18:31.769 "crdt2": 0, 00:18:31.769 "crdt3": 0 00:18:31.769 } 00:18:31.769 }, 00:18:31.769 { 00:18:31.769 "method": "nvmf_create_transport", 00:18:31.769 "params": { 00:18:31.769 "trtype": "TCP", 00:18:31.769 "max_queue_depth": 128, 00:18:31.769 "max_io_qpairs_per_ctrlr": 127, 00:18:31.769 "in_capsule_data_size": 4096, 00:18:31.769 "max_io_size": 131072, 00:18:31.769 "io_unit_size": 131072, 00:18:31.769 "max_aq_depth": 128, 00:18:31.769 "num_shared_buffers": 511, 00:18:31.769 
"buf_cache_size": 4294967295, 00:18:31.769 "dif_insert_or_strip": false, 00:18:31.769 "zcopy": false, 00:18:31.769 "c2h_success": false, 00:18:31.769 "sock_priority": 0, 00:18:31.769 "abort_timeout_sec": 1, 00:18:31.769 "ack_timeout": 0, 00:18:31.769 "data_wr_pool_size": 0 00:18:31.769 } 00:18:31.770 }, 00:18:31.770 { 00:18:31.770 "method": "nvmf_create_subsystem", 00:18:31.770 "params": { 00:18:31.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.770 "allow_any_host": false, 00:18:31.770 "serial_number": "00000000000000000000", 00:18:31.770 "model_number": "SPDK bdev Controller", 00:18:31.770 "max_namespaces": 32, 00:18:31.770 "min_cntlid": 1, 00:18:31.770 "max_cntlid": 65519, 00:18:31.770 "ana_reporting": false 00:18:31.770 } 00:18:31.770 }, 00:18:31.770 { 00:18:31.770 "method": "nvmf_subsystem_add_host", 00:18:31.770 "params": { 00:18:31.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.770 "host": "nqn.2016-06.io.spdk:host1", 00:18:31.770 "psk": "key0" 00:18:31.770 } 00:18:31.770 }, 00:18:31.770 { 00:18:31.770 "method": "nvmf_subsystem_add_ns", 00:18:31.770 "params": { 00:18:31.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.770 "namespace": { 00:18:31.770 "nsid": 1, 00:18:31.770 "bdev_name": "malloc0", 00:18:31.770 "nguid": "06089D0C76A847D49BCF2F2215EB19CF", 00:18:31.770 "uuid": "06089d0c-76a8-47d4-9bcf-2f2215eb19cf", 00:18:31.770 "no_auto_visible": false 00:18:31.770 } 00:18:31.770 } 00:18:31.770 }, 00:18:31.770 { 00:18:31.770 "method": "nvmf_subsystem_add_listener", 00:18:31.770 "params": { 00:18:31.770 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.770 "listen_address": { 00:18:31.770 "trtype": "TCP", 00:18:31.770 "adrfam": "IPv4", 00:18:31.770 "traddr": "10.0.0.2", 00:18:31.770 "trsvcid": "4420" 00:18:31.770 }, 00:18:31.770 "secure_channel": false, 00:18:31.770 "sock_impl": "ssl" 00:18:31.770 } 00:18:31.770 } 00:18:31.770 ] 00:18:31.770 } 00:18:31.770 ] 00:18:31.770 }' 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=942463 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 942463 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 942463 ']' 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.770 14:20:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.770 [2024-07-25 14:20:01.242961] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:18:31.770 [2024-07-25 14:20:01.243052] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.770 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.770 [2024-07-25 14:20:01.307395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.770 [2024-07-25 14:20:01.415416] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.770 [2024-07-25 14:20:01.415474] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.770 [2024-07-25 14:20:01.415487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.770 [2024-07-25 14:20:01.415498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.770 [2024-07-25 14:20:01.415508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.770 [2024-07-25 14:20:01.415578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.027 [2024-07-25 14:20:01.652390] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.286 [2024-07-25 14:20:01.694627] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:32.286 [2024-07-25 14:20:01.694853] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:32.853 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.853 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.853 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:32.853 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=942614 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 942614 /var/tmp/bdevperf.sock 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 942614 ']' 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:32.854 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:32.854 "subsystems": [ 00:18:32.854 { 00:18:32.854 "subsystem": "keyring", 00:18:32.854 "config": [ 00:18:32.854 { 00:18:32.854 "method": "keyring_file_add_key", 00:18:32.854 "params": { 00:18:32.854 "name": "key0", 00:18:32.854 "path": "/tmp/tmp.3vuAI12gxd" 00:18:32.854 } 00:18:32.854 } 00:18:32.854 ] 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "subsystem": "iobuf", 00:18:32.854 "config": [ 00:18:32.854 { 00:18:32.854 "method": "iobuf_set_options", 00:18:32.854 "params": { 00:18:32.854 "small_pool_count": 8192, 00:18:32.854 "large_pool_count": 1024, 00:18:32.854 "small_bufsize": 8192, 00:18:32.854 "large_bufsize": 135168 00:18:32.854 } 00:18:32.854 } 00:18:32.854 ] 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "subsystem": "sock", 00:18:32.854 "config": [ 00:18:32.854 { 00:18:32.854 "method": "sock_set_default_impl", 00:18:32.854 "params": { 00:18:32.854 "impl_name": "posix" 00:18:32.854 } 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "method": "sock_impl_set_options", 00:18:32.854 "params": { 00:18:32.854 "impl_name": "ssl", 00:18:32.854 "recv_buf_size": 4096, 00:18:32.854 "send_buf_size": 4096, 00:18:32.854 "enable_recv_pipe": true, 00:18:32.854 "enable_quickack": false, 00:18:32.854 "enable_placement_id": 0, 00:18:32.854 "enable_zerocopy_send_server": true, 00:18:32.854 "enable_zerocopy_send_client": false, 00:18:32.854 "zerocopy_threshold": 0, 00:18:32.854 "tls_version": 0, 00:18:32.854 "enable_ktls": false 00:18:32.854 } 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "method": "sock_impl_set_options", 00:18:32.854 "params": { 00:18:32.854 "impl_name": "posix", 00:18:32.854 "recv_buf_size": 2097152, 00:18:32.854 "send_buf_size": 2097152, 00:18:32.854 "enable_recv_pipe": true, 00:18:32.854 "enable_quickack": false, 00:18:32.854 "enable_placement_id": 0, 00:18:32.854 "enable_zerocopy_send_server": true, 00:18:32.854 "enable_zerocopy_send_client": false, 00:18:32.854 "zerocopy_threshold": 0, 00:18:32.854 "tls_version": 0, 00:18:32.854 "enable_ktls": false 00:18:32.854 } 00:18:32.854 } 00:18:32.854 ] 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "subsystem": "vmd", 00:18:32.854 "config": [] 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "subsystem": "accel", 00:18:32.854 "config": [ 00:18:32.854 { 00:18:32.854 "method": "accel_set_options", 00:18:32.854 "params": { 00:18:32.854 "small_cache_size": 128, 00:18:32.854 "large_cache_size": 16, 00:18:32.854 "task_count": 2048, 00:18:32.854 "sequence_count": 2048, 00:18:32.854 "buf_count": 2048 00:18:32.854 } 00:18:32.854 } 00:18:32.854 ] 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "subsystem": "bdev", 00:18:32.854 "config": [ 00:18:32.854 { 00:18:32.854 "method": "bdev_set_options", 00:18:32.854 "params": { 00:18:32.854 "bdev_io_pool_size": 65535, 00:18:32.854 "bdev_io_cache_size": 256, 00:18:32.854 "bdev_auto_examine": true, 00:18:32.854 "iobuf_small_cache_size": 128, 00:18:32.854 "iobuf_large_cache_size": 16 00:18:32.854 } 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "method": "bdev_raid_set_options", 00:18:32.854 "params": { 00:18:32.854 "process_window_size_kb": 1024, 00:18:32.854 "process_max_bandwidth_mb_sec": 0 00:18:32.854 } 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "method": "bdev_iscsi_set_options", 00:18:32.854 "params": { 00:18:32.854 "timeout_sec": 30 00:18:32.854 } 00:18:32.854 }, 00:18:32.854 { 00:18:32.854 "method": "bdev_nvme_set_options", 00:18:32.854 "params": { 00:18:32.854 "action_on_timeout": "none", 00:18:32.855 "timeout_us": 0, 
00:18:32.855 "timeout_admin_us": 0, 00:18:32.855 "keep_alive_timeout_ms": 10000, 00:18:32.855 "arbitration_burst": 0, 00:18:32.855 "low_priority_weight": 0, 00:18:32.855 "medium_priority_weight": 0, 00:18:32.855 "high_priority_weight": 0, 00:18:32.855 "nvme_adminq_poll_period_us": 10000, 00:18:32.855 "nvme_ioq_poll_period_us": 0, 00:18:32.855 "io_queue_requests": 512, 00:18:32.855 "delay_cmd_submit": true, 00:18:32.855 "transport_retry_count": 4, 00:18:32.855 "bdev_retry_count": 3, 00:18:32.855 "transport_ack_timeout": 0, 00:18:32.855 "ctrlr_loss_timeout_sec": 0, 00:18:32.855 "reconnect_delay_sec": 0, 00:18:32.855 "fast_io_fail_timeout_sec": 0, 00:18:32.855 "disable_auto_failback": false, 00:18:32.855 "generate_uuids": false, 00:18:32.855 "transport_tos": 0, 00:18:32.855 "nvme_error_stat": false, 00:18:32.855 "rdma_srq_size": 0, 00:18:32.855 "io_path_stat": false, 00:18:32.855 "allow_accel_sequence": false, 00:18:32.855 "rdma_max_cq_size": 0, 00:18:32.855 "rdma_cm_event_timeout_ms": 0, 00:18:32.855 "dhchap_digests": [ 00:18:32.855 "sha256", 00:18:32.855 "sha384", 00:18:32.855 "sha512" 00:18:32.855 ], 00:18:32.855 "dhchap_dhgroups": [ 00:18:32.855 "null", 00:18:32.855 "ffdhe2048", 00:18:32.855 "ffdhe3072", 00:18:32.855 "ffdhe4096", 00:18:32.855 "ffdhe6144", 00:18:32.855 "ffdhe8192" 00:18:32.855 ] 00:18:32.855 } 00:18:32.855 }, 00:18:32.855 { 00:18:32.855 "method": "bdev_nvme_attach_controller", 00:18:32.855 "params": { 00:18:32.855 "name": "nvme0", 00:18:32.855 "trtype": "TCP", 00:18:32.855 "adrfam": "IPv4", 00:18:32.855 "traddr": "10.0.0.2", 00:18:32.855 "trsvcid": "4420", 00:18:32.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:32.855 "prchk_reftag": false, 00:18:32.855 "prchk_guard": false, 00:18:32.855 "ctrlr_loss_timeout_sec": 0, 00:18:32.855 "reconnect_delay_sec": 0, 00:18:32.855 "fast_io_fail_timeout_sec": 0, 00:18:32.855 "psk": "key0", 00:18:32.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:32.855 "hdgst": false, 00:18:32.855 "ddgst": false 00:18:32.855 } 00:18:32.855 }, 00:18:32.855 { 00:18:32.855 "method": "bdev_nvme_set_hotplug", 00:18:32.855 "params": { 00:18:32.855 "period_us": 100000, 00:18:32.855 "enable": false 00:18:32.855 } 00:18:32.855 }, 00:18:32.855 { 00:18:32.855 "method": "bdev_enable_histogram", 00:18:32.855 "params": { 00:18:32.855 "name": "nvme0n1", 00:18:32.855 "enable": true 00:18:32.855 } 00:18:32.855 }, 00:18:32.855 { 00:18:32.855 "method": "bdev_wait_for_examine" 00:18:32.855 } 00:18:32.855 ] 00:18:32.855 }, 00:18:32.855 { 00:18:32.855 "subsystem": "nbd", 00:18:32.855 "config": [] 00:18:32.855 } 00:18:32.855 ] 00:18:32.855 }' 00:18:32.855 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:32.855 14:20:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.855 [2024-07-25 14:20:02.306884] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:18:32.855 [2024-07-25 14:20:02.306969] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942614 ] 00:18:32.855 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.855 [2024-07-25 14:20:02.364381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.855 [2024-07-25 14:20:02.468729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.116 [2024-07-25 14:20:02.642097] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:33.683 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.683 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:33.683 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:33.683 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:33.940 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.940 14:20:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:34.195 Running I/O for 1 seconds... 00:18:35.132 00:18:35.132 Latency(us) 00:18:35.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.132 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:35.132 Verification LBA range: start 0x0 length 0x2000 00:18:35.132 nvme0n1 : 1.02 3495.76 13.66 0.00 0.00 36284.15 6043.88 36117.62 00:18:35.132 =================================================================================================================== 00:18:35.132 Total : 3495.76 13.66 0.00 0.00 36284.15 6043.88 36117.62 00:18:35.132 0 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:35.132 nvmf_trace.0 00:18:35.132 14:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 942614 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 942614 ']' 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 942614 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942614 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942614' 00:18:35.132 killing process with pid 942614 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 942614 00:18:35.132 Received shutdown signal, test time was about 1.000000 seconds 00:18:35.132 00:18:35.132 Latency(us) 00:18:35.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:35.132 =================================================================================================================== 00:18:35.132 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:35.132 14:20:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 942614 00:18:35.391 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:35.391 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.391 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:35.391 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.391 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:35.391 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.391 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.391 rmmod nvme_tcp 00:18:35.652 rmmod nvme_fabrics 00:18:35.652 rmmod nvme_keyring 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 942463 ']' 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 942463 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 942463 ']' 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 942463 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.652 14:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942463 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942463' 00:18:35.652 killing process with pid 942463 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 942463 00:18:35.652 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 942463 00:18:35.911 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.911 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.911 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.911 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.911 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.911 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.911 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.911 14:20:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.815 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.815 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.EKMYdlzHjT /tmp/tmp.kXIxIDMaKp /tmp/tmp.3vuAI12gxd 00:18:37.815 00:18:37.815 real 1m19.856s 00:18:37.815 user 2m10.170s 00:18:37.815 sys 0m24.438s 00:18:37.815 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.815 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.815 ************************************ 00:18:37.815 END TEST nvmf_tls 00:18:37.815 ************************************ 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:38.074 ************************************ 00:18:38.074 START TEST nvmf_fips 00:18:38.074 ************************************ 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:38.074 * Looking for test storage... 
00:18:38.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.074 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:38.075 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:38.075 Error setting digest 00:18:38.076 009245C6DF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:38.076 009245C6DF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.076 14:20:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.607 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:40.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
00:18:40.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:40.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:40.608 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.608 
14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:18:40.608 00:18:40.608 --- 10.0.0.2 ping statistics --- 00:18:40.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.608 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:18:40.608 00:18:40.608 --- 10.0.0.1 ping statistics --- 00:18:40.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.608 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=944971 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 944971 00:18:40.608 14:20:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 944971 ']' 00:18:40.608 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.608 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.609 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.609 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.609 14:20:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:40.609 [2024-07-25 14:20:10.077280] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:18:40.609 [2024-07-25 14:20:10.077396] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.609 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.609 [2024-07-25 14:20:10.141736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.609 [2024-07-25 14:20:10.254093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.609 [2024-07-25 14:20:10.254167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.609 [2024-07-25 14:20:10.254182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.609 [2024-07-25 14:20:10.254204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.609 [2024-07-25 14:20:10.254214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.609 [2024-07-25 14:20:10.254256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:41.542 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.800 [2024-07-25 14:20:11.317510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:41.800 [2024-07-25 14:20:11.333521] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:41.800 [2024-07-25 14:20:11.333720] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:41.800 
[2024-07-25 14:20:11.364680] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:41.800 malloc0 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=945132 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 945132 /var/tmp/bdevperf.sock 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 945132 ']' 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:41.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:41.800 14:20:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:42.058 [2024-07-25 14:20:11.457498] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:18:42.058 [2024-07-25 14:20:11.457590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid945132 ] 00:18:42.058 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.058 [2024-07-25 14:20:11.515459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.058 [2024-07-25 14:20:11.619038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.991 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:42.991 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:42.991 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:43.249 [2024-07-25 14:20:12.696203] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.249 [2024-07-25 14:20:12.696328] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:43.249 TLSTESTn1 00:18:43.249 14:20:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:43.507 Running I/O for 10 seconds... 
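Up to this point the FIPS test has written the interchange-format TLS PSK to a key file with 0600 permissions, pointed the target at it via setup_nvmf_tgt_conf, attached a TLS-protected controller from bdevperf, and kicked off the 10-second verify run. The initiator-side steps, condensed from the trace (long jenkins paths shortened; the target-side RPCs issued inside setup_nvmf_tgt_conf are not expanded in this log):

  # key material used by both sides (from fips.sh)
  echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > fips/key.txt
  chmod 0600 fips/key.txt
  # bdevperf waits in -z mode on its own RPC socket, then a TLS controller is attached to it
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk fips/key.txt
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests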
00:18:53.468 00:18:53.468 Latency(us) 00:18:53.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.468 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:53.468 Verification LBA range: start 0x0 length 0x2000 00:18:53.468 TLSTESTn1 : 10.03 3025.12 11.82 0.00 0.00 42222.28 11602.30 32234.00 00:18:53.468 =================================================================================================================== 00:18:53.468 Total : 3025.12 11.82 0.00 0.00 42222.28 11602.30 32234.00 00:18:53.468 0 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:53.468 14:20:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:53.468 nvmf_trace.0 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 945132 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 945132 ']' 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 945132 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 945132 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 945132' 00:18:53.468 killing process with pid 945132 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 945132 00:18:53.468 Received shutdown signal, test time was about 10.000000 seconds 00:18:53.468 00:18:53.468 Latency(us) 00:18:53.468 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.468 =================================================================================================================== 00:18:53.468 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:53.468 [2024-07-25 
14:20:23.100460] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:53.468 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 945132 00:18:53.725 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:53.725 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:53.725 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:53.725 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:53.725 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:53.725 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:53.725 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:53.725 rmmod nvme_tcp 00:18:53.982 rmmod nvme_fabrics 00:18:53.982 rmmod nvme_keyring 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 944971 ']' 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 944971 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 944971 ']' 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 944971 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944971 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944971' 00:18:53.982 killing process with pid 944971 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 944971 00:18:53.982 [2024-07-25 14:20:23.461751] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:53.982 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 944971 00:18:54.240 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:54.240 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:54.240 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:54.240 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:54.240 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:54.241 14:20:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:54.241 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:54.241 14:20:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:56.142 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:56.142 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:56.142 00:18:56.142 real 0m18.281s 00:18:56.142 user 0m21.048s 00:18:56.142 sys 0m6.807s 00:18:56.142 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:56.142 14:20:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:56.142 ************************************ 00:18:56.142 END TEST nvmf_fips 00:18:56.142 ************************************ 00:18:56.400 14:20:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:56.400 14:20:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:18:56.400 14:20:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:18:56.400 14:20:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:18:56.400 14:20:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:18:56.400 14:20:25 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:18:56.400 14:20:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:58.929 14:20:27 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:58.929 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:58.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:58.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:58.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:58.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.930 14:20:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:58.930 ************************************ 00:18:58.930 START TEST nvmf_perf_adq 00:18:58.930 ************************************ 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:58.930 * Looking for test storage... 
00:18:58.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.930 14:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:58.930 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:58.931 14:20:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:00.831 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.832 14:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:00.832 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:00.832 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:00.832 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:00.832 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:00.832 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:01.400 14:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:03.361 14:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.634 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:08.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:08.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:08.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.635 14:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:08.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
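The perf_adq run rebuilds the same namespace topology; the practical effect of the NVMF_TARGET_NS_CMD / NVMF_APP arrays seen above is that the target command line is prefixed with "ip netns exec", so nvmf_tgt binds 10.0.0.2 on the in-namespace port while spdk_nvme_perf connects from the host side. As launched a few entries below (path shortened):

  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc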
00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:08.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:08.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:19:08.635 00:19:08.635 --- 10.0.0.2 ping statistics --- 00:19:08.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.635 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:19:08.635 00:19:08.635 --- 10.0.0.1 ping statistics --- 00:19:08.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.635 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=951016 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 951016 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 951016 ']' 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:08.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.635 14:20:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.635 [2024-07-25 14:20:37.960026] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:19:08.635 [2024-07-25 14:20:37.960122] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.635 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.636 [2024-07-25 14:20:38.022436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.636 [2024-07-25 14:20:38.130588] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.636 [2024-07-25 14:20:38.130646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.636 [2024-07-25 14:20:38.130659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.636 [2024-07-25 14:20:38.130671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.636 [2024-07-25 14:20:38.130680] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:08.636 [2024-07-25 14:20:38.130764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.636 [2024-07-25 14:20:38.130827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.636 [2024-07-25 14:20:38.130895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.636 [2024-07-25 14:20:38.130898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
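The default socket implementation comes back as posix, and adq_configure_nvmf_target 0 then prepares the target for the ADQ measurement: the posix sock layer gets its placement-id mode set to the value passed in (0 for this first pass) and server-side zero-copy send enabled before framework init, the TCP transport is created with an 8192-byte io-unit-size and sock priority 0, and a 64 MB Malloc bdev (512-byte blocks) is exported behind a listener on 10.0.0.2:4420. The RPC sequence traced below, shown as plain rpc.py calls (rpc_cmd in the trace is the suite's RPC helper; treating it as a one-to-one rpc.py invocation is an assumption):

  rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420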
00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.636 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 [2024-07-25 14:20:38.350959] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 Malloc1 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.896 [2024-07-25 14:20:38.404072] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=951164 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:08.896 14:20:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:08.896 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.800 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:10.800 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.800 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:10.800 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.800 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:10.800 "tick_rate": 2700000000, 00:19:10.800 "poll_groups": [ 00:19:10.800 { 00:19:10.800 "name": "nvmf_tgt_poll_group_000", 00:19:10.800 "admin_qpairs": 1, 00:19:10.800 "io_qpairs": 1, 00:19:10.800 "current_admin_qpairs": 1, 00:19:10.800 "current_io_qpairs": 1, 00:19:10.800 "pending_bdev_io": 0, 00:19:10.800 "completed_nvme_io": 19191, 00:19:10.800 "transports": [ 00:19:10.800 { 00:19:10.800 "trtype": "TCP" 00:19:10.800 } 00:19:10.800 ] 00:19:10.800 }, 00:19:10.800 { 00:19:10.800 "name": "nvmf_tgt_poll_group_001", 00:19:10.800 "admin_qpairs": 0, 00:19:10.800 "io_qpairs": 1, 00:19:10.800 "current_admin_qpairs": 0, 00:19:10.800 "current_io_qpairs": 1, 00:19:10.800 "pending_bdev_io": 0, 00:19:10.800 "completed_nvme_io": 20838, 00:19:10.800 "transports": [ 00:19:10.800 { 00:19:10.800 "trtype": "TCP" 00:19:10.800 } 00:19:10.800 ] 00:19:10.800 }, 00:19:10.800 { 00:19:10.800 "name": "nvmf_tgt_poll_group_002", 00:19:10.800 "admin_qpairs": 0, 00:19:10.800 "io_qpairs": 1, 00:19:10.800 "current_admin_qpairs": 0, 00:19:10.800 "current_io_qpairs": 1, 00:19:10.800 "pending_bdev_io": 0, 00:19:10.800 "completed_nvme_io": 20584, 00:19:10.800 "transports": [ 00:19:10.800 { 00:19:10.800 "trtype": "TCP" 00:19:10.800 } 00:19:10.800 ] 00:19:10.800 }, 00:19:10.800 { 00:19:10.800 "name": "nvmf_tgt_poll_group_003", 00:19:10.800 "admin_qpairs": 0, 00:19:10.800 "io_qpairs": 1, 00:19:10.800 "current_admin_qpairs": 0, 00:19:10.800 "current_io_qpairs": 1, 00:19:10.800 "pending_bdev_io": 0, 00:19:10.800 "completed_nvme_io": 20254, 00:19:10.800 "transports": [ 00:19:10.800 { 00:19:10.800 "trtype": "TCP" 00:19:10.800 } 00:19:10.800 ] 00:19:10.800 } 00:19:10.800 ] 00:19:10.800 }' 00:19:10.800 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:10.800 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:11.058 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:11.058 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:11.058 14:20:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 951164 00:19:19.213 Initializing NVMe Controllers 00:19:19.213 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:19.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:19.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:19.213 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:19.213 Initialization complete. Launching workers. 00:19:19.213 ======================================================== 00:19:19.213 Latency(us) 00:19:19.213 Device Information : IOPS MiB/s Average min max 00:19:19.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10639.10 41.56 6017.28 2363.16 9983.08 00:19:19.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10848.50 42.38 5899.03 1981.86 9640.24 00:19:19.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10707.40 41.83 5977.98 1557.40 9721.23 00:19:19.213 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10142.90 39.62 6309.98 1340.98 10642.75 00:19:19.213 ======================================================== 00:19:19.213 Total : 42337.89 165.38 6047.16 1340.98 10642.75 00:19:19.213 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.213 rmmod nvme_tcp 00:19:19.213 rmmod nvme_fabrics 00:19:19.213 rmmod nvme_keyring 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 951016 ']' 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 951016 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 951016 ']' 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 951016 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 951016 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:19.213 14:20:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 951016' 00:19:19.213 killing process with pid 951016 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 951016 00:19:19.213 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 951016 00:19:19.474 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:19.474 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:19.474 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:19.474 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.474 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.474 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.474 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.474 14:20:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.377 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.377 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:21.377 14:20:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:21.942 14:20:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:24.479 14:20:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:29.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.760 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:29.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:29.761 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:29.761 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:29.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:19:29.761 00:19:29.761 --- 10.0.0.2 ping statistics --- 00:19:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.761 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:19:29.761 00:19:29.761 --- 10.0.0.1 ping statistics --- 00:19:29.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.761 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:29.761 net.core.busy_poll = 1 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:29.761 net.core.busy_read = 1 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:29.761 
14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=953717 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 953717 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 953717 ']' 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.761 14:20:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.761 [2024-07-25 14:20:58.902599] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:19:29.762 [2024-07-25 14:20:58.902696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.762 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.762 [2024-07-25 14:20:58.971756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.762 [2024-07-25 14:20:59.081651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.762 [2024-07-25 14:20:59.081713] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.762 [2024-07-25 14:20:59.081741] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.762 [2024-07-25 14:20:59.081752] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.762 [2024-07-25 14:20:59.081762] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
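Before that second target comes up, adq_configure_driver has put the target-side ice interface into ADQ mode. Condensed into a sketch of the commands just executed (interface cvl_0_0, namespace cvl_0_0_ns_spdk, listener 10.0.0.2:4420 and the 2-TC split are the values from this run; the ice driver was reloaded beforehand as shown above):

ns="ip netns exec cvl_0_0_ns_spdk"
$ns ethtool --offload cvl_0_0 hw-tc-offload on                    # hardware traffic-class offload on the ice port
$ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                                    # enable socket busy polling
sysctl -w net.core.busy_read=1
$ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$ns tc qdisc add dev cvl_0_0 ingress
$ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1   # steer NVMe/TCP port 4420 into TC 1
$ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0

The effect is visible in the second nvmf_get_stats dump further down: with placement-id 1 the I/O queue pairs are no longer spread one per poll group the way they were in the baseline run.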
00:19:29.762 [2024-07-25 14:20:59.081812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.762 [2024-07-25 14:20:59.082269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.762 [2024-07-25 14:20:59.082297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.762 [2024-07-25 14:20:59.086081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:30.327 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.328 14:20:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.611 [2024-07-25 14:21:00.036841] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.611 Malloc1 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:30.611 [2024-07-25 14:21:00.088377] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=953959 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:30.611 14:21:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:30.611 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:32.527 "tick_rate": 2700000000, 00:19:32.527 "poll_groups": [ 00:19:32.527 { 00:19:32.527 "name": "nvmf_tgt_poll_group_000", 00:19:32.527 "admin_qpairs": 1, 00:19:32.527 "io_qpairs": 1, 00:19:32.527 "current_admin_qpairs": 1, 00:19:32.527 
"current_io_qpairs": 1, 00:19:32.527 "pending_bdev_io": 0, 00:19:32.527 "completed_nvme_io": 25095, 00:19:32.527 "transports": [ 00:19:32.527 { 00:19:32.527 "trtype": "TCP" 00:19:32.527 } 00:19:32.527 ] 00:19:32.527 }, 00:19:32.527 { 00:19:32.527 "name": "nvmf_tgt_poll_group_001", 00:19:32.527 "admin_qpairs": 0, 00:19:32.527 "io_qpairs": 3, 00:19:32.527 "current_admin_qpairs": 0, 00:19:32.527 "current_io_qpairs": 3, 00:19:32.527 "pending_bdev_io": 0, 00:19:32.527 "completed_nvme_io": 27097, 00:19:32.527 "transports": [ 00:19:32.527 { 00:19:32.527 "trtype": "TCP" 00:19:32.527 } 00:19:32.527 ] 00:19:32.527 }, 00:19:32.527 { 00:19:32.527 "name": "nvmf_tgt_poll_group_002", 00:19:32.527 "admin_qpairs": 0, 00:19:32.527 "io_qpairs": 0, 00:19:32.527 "current_admin_qpairs": 0, 00:19:32.527 "current_io_qpairs": 0, 00:19:32.527 "pending_bdev_io": 0, 00:19:32.527 "completed_nvme_io": 0, 00:19:32.527 "transports": [ 00:19:32.527 { 00:19:32.527 "trtype": "TCP" 00:19:32.527 } 00:19:32.527 ] 00:19:32.527 }, 00:19:32.527 { 00:19:32.527 "name": "nvmf_tgt_poll_group_003", 00:19:32.527 "admin_qpairs": 0, 00:19:32.527 "io_qpairs": 0, 00:19:32.527 "current_admin_qpairs": 0, 00:19:32.527 "current_io_qpairs": 0, 00:19:32.527 "pending_bdev_io": 0, 00:19:32.527 "completed_nvme_io": 0, 00:19:32.527 "transports": [ 00:19:32.527 { 00:19:32.527 "trtype": "TCP" 00:19:32.527 } 00:19:32.527 ] 00:19:32.527 } 00:19:32.527 ] 00:19:32.527 }' 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:32.527 14:21:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 953959 00:19:40.638 Initializing NVMe Controllers 00:19:40.638 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:40.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:40.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:40.638 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:40.638 Initialization complete. Launching workers. 
00:19:40.638 ======================================================== 00:19:40.638 Latency(us) 00:19:40.638 Device Information : IOPS MiB/s Average min max 00:19:40.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13602.90 53.14 4704.91 1368.21 46513.87 00:19:40.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4613.80 18.02 13876.66 2213.03 60118.19 00:19:40.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4893.90 19.12 13082.72 2110.31 61123.93 00:19:40.638 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4631.70 18.09 13819.80 1677.76 61591.65 00:19:40.638 ======================================================== 00:19:40.638 Total : 27742.30 108.37 9229.92 1368.21 61591.65 00:19:40.638 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.638 rmmod nvme_tcp 00:19:40.638 rmmod nvme_fabrics 00:19:40.638 rmmod nvme_keyring 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 953717 ']' 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 953717 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 953717 ']' 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 953717 00:19:40.638 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:40.897 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.897 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 953717 00:19:40.897 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.897 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.897 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 953717' 00:19:40.897 killing process with pid 953717 00:19:40.897 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 953717 00:19:40.897 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 953717 00:19:41.157 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.157 14:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.157 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.157 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.157 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.157 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.157 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.157 14:21:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:44.447 00:19:44.447 real 0m45.671s 00:19:44.447 user 2m40.878s 00:19:44.447 sys 0m10.238s 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.447 ************************************ 00:19:44.447 END TEST nvmf_perf_adq 00:19:44.447 ************************************ 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.447 ************************************ 00:19:44.447 START TEST nvmf_shutdown 00:19:44.447 ************************************ 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:44.447 * Looking for test storage... 
00:19:44.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.447 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.448 14:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:44.448 14:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:44.448 ************************************ 00:19:44.448 START TEST nvmf_shutdown_tc1 00:19:44.448 ************************************ 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.448 14:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:46.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:46.354 14:21:15 
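The trace above shows gather_supported_nvmf_pci_devs classifying NICs by PCI vendor:device ID: Intel E810 parts (0x1592, 0x159b) and X722 (0x37d2) plus a list of Mellanox ConnectX IDs are collected, and on this host two E810 ports (0000:0a:00.0 and 0000:0a:00.1, device 0x159b, bound to the ice driver) end up in pci_devs. The following is a minimal stand-alone sketch of that classification read straight from sysfs; it is not the nvmf/common.sh implementation, which drives the same logic from an internal pci_bus_cache and keeps only the selected NIC family.

# Hypothetical rework of the device scan seen in the trace (sysfs-based).
intel=0x8086 mellanox=0x15b3
e810="0x1592 0x159b"; x722="0x37d2"
mlx="0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013"
pci_devs=()
for dev in /sys/bus/pci/devices/*; do
    vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
    case "$vendor" in
        "$intel")    [[ " $e810 $x722 " == *" $device "* ]] && pci_devs+=("${dev##*/}") ;;
        "$mellanox") [[ " $mlx " == *" $device "* ]]        && pci_devs+=("${dev##*/}") ;;
    esac
done
# Each selected function exposes its netdev under .../net/, which is how the
# trace arrives at cvl_0_0 and cvl_0_1 for 0000:0a:00.0 and 0000:0a:00.1.
for pci in "${pci_devs[@]}"; do
    echo "Found $pci -> $(ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null)"
done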
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.354 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:46.355 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:46.355 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:46.355 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:46.355 14:21:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:46.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:19:46.355 00:19:46.355 --- 10.0.0.2 ping statistics --- 00:19:46.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.355 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:46.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:19:46.355 00:19:46.355 --- 10.0.0.1 ping statistics --- 00:19:46.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.355 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=957853 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 957853 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 957853 ']' 00:19:46.355 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.356 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.356 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.356 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.356 14:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.616 [2024-07-25 14:21:16.036878] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:19:46.616 [2024-07-25 14:21:16.036957] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.616 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.616 [2024-07-25 14:21:16.100181] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.616 [2024-07-25 14:21:16.201821] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.616 [2024-07-25 14:21:16.201877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.616 [2024-07-25 14:21:16.201905] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.616 [2024-07-25 14:21:16.201916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.616 [2024-07-25 14:21:16.201926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
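In the nvmftestinit phase traced above, nvmf_tcp_init turns the two E810 ports into a self-contained target/initiator pair: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, an iptables rule opens TCP/4420 on the initiator interface, both directions are verified with ping, and nvmf_tgt is then launched inside the namespace. Below is a condensed replay of those steps using the names and addresses from this run; the real helper in nvmf/common.sh adds error handling and xtrace control around the same commands.

# Sketch of the namespace plumbing recorded in the trace (run as root).
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                  # target port moves into the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"           # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root namespace -> namespaced target port
ip netns exec "$NS" ping -c 1 10.0.0.1                # namespace -> initiator port
# nvmfappstart then launches the target inside the namespace, as traced:
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &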
00:19:46.616 [2024-07-25 14:21:16.202010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.616 [2024-07-25 14:21:16.202083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.616 [2024-07-25 14:21:16.202430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:46.616 [2024-07-25 14:21:16.202435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.875 [2024-07-25 14:21:16.358662] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.875 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.875 Malloc1 00:19:46.875 [2024-07-25 14:21:16.433554] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.875 Malloc2 00:19:46.875 Malloc3 00:19:47.133 Malloc4 00:19:47.133 Malloc5 00:19:47.133 Malloc6 00:19:47.133 Malloc7 00:19:47.133 Malloc8 00:19:47.392 Malloc9 00:19:47.392 Malloc10 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=957913 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 957913 /var/tmp/bdevperf.sock 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 957913 ']' 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:47.392 14:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:47.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.392 { 00:19:47.392 "params": { 00:19:47.392 "name": "Nvme$subsystem", 00:19:47.392 "trtype": "$TEST_TRANSPORT", 00:19:47.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.392 "adrfam": "ipv4", 00:19:47.392 "trsvcid": "$NVMF_PORT", 00:19:47.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.392 "hdgst": ${hdgst:-false}, 00:19:47.392 "ddgst": ${ddgst:-false} 00:19:47.392 }, 00:19:47.392 "method": "bdev_nvme_attach_controller" 00:19:47.392 } 00:19:47.392 EOF 00:19:47.392 )") 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.392 { 00:19:47.392 "params": { 00:19:47.392 "name": "Nvme$subsystem", 00:19:47.392 "trtype": "$TEST_TRANSPORT", 00:19:47.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.392 "adrfam": "ipv4", 00:19:47.392 "trsvcid": "$NVMF_PORT", 00:19:47.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.392 "hdgst": ${hdgst:-false}, 00:19:47.392 "ddgst": ${ddgst:-false} 00:19:47.392 }, 00:19:47.392 "method": "bdev_nvme_attach_controller" 00:19:47.392 } 00:19:47.392 EOF 00:19:47.392 )") 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.392 { 00:19:47.392 "params": { 00:19:47.392 "name": 
"Nvme$subsystem", 00:19:47.392 "trtype": "$TEST_TRANSPORT", 00:19:47.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.392 "adrfam": "ipv4", 00:19:47.392 "trsvcid": "$NVMF_PORT", 00:19:47.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.392 "hdgst": ${hdgst:-false}, 00:19:47.392 "ddgst": ${ddgst:-false} 00:19:47.392 }, 00:19:47.392 "method": "bdev_nvme_attach_controller" 00:19:47.392 } 00:19:47.392 EOF 00:19:47.392 )") 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.392 { 00:19:47.392 "params": { 00:19:47.392 "name": "Nvme$subsystem", 00:19:47.392 "trtype": "$TEST_TRANSPORT", 00:19:47.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.392 "adrfam": "ipv4", 00:19:47.392 "trsvcid": "$NVMF_PORT", 00:19:47.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.392 "hdgst": ${hdgst:-false}, 00:19:47.392 "ddgst": ${ddgst:-false} 00:19:47.392 }, 00:19:47.392 "method": "bdev_nvme_attach_controller" 00:19:47.392 } 00:19:47.392 EOF 00:19:47.392 )") 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.392 { 00:19:47.392 "params": { 00:19:47.392 "name": "Nvme$subsystem", 00:19:47.392 "trtype": "$TEST_TRANSPORT", 00:19:47.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.392 "adrfam": "ipv4", 00:19:47.392 "trsvcid": "$NVMF_PORT", 00:19:47.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.392 "hdgst": ${hdgst:-false}, 00:19:47.392 "ddgst": ${ddgst:-false} 00:19:47.392 }, 00:19:47.392 "method": "bdev_nvme_attach_controller" 00:19:47.392 } 00:19:47.392 EOF 00:19:47.392 )") 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.392 { 00:19:47.392 "params": { 00:19:47.392 "name": "Nvme$subsystem", 00:19:47.392 "trtype": "$TEST_TRANSPORT", 00:19:47.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.392 "adrfam": "ipv4", 00:19:47.392 "trsvcid": "$NVMF_PORT", 00:19:47.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.392 "hdgst": ${hdgst:-false}, 00:19:47.392 "ddgst": ${ddgst:-false} 00:19:47.392 }, 00:19:47.392 "method": "bdev_nvme_attach_controller" 00:19:47.392 } 00:19:47.392 EOF 00:19:47.392 )") 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.392 { 00:19:47.392 "params": { 00:19:47.392 "name": "Nvme$subsystem", 00:19:47.392 "trtype": "$TEST_TRANSPORT", 00:19:47.392 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.392 "adrfam": "ipv4", 00:19:47.392 "trsvcid": "$NVMF_PORT", 00:19:47.392 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.392 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.392 "hdgst": ${hdgst:-false}, 00:19:47.392 "ddgst": ${ddgst:-false} 00:19:47.392 }, 00:19:47.392 "method": "bdev_nvme_attach_controller" 00:19:47.392 } 00:19:47.392 EOF 00:19:47.392 )") 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.392 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.392 { 00:19:47.392 "params": { 00:19:47.392 "name": "Nvme$subsystem", 00:19:47.393 "trtype": "$TEST_TRANSPORT", 00:19:47.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "$NVMF_PORT", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.393 "hdgst": ${hdgst:-false}, 00:19:47.393 "ddgst": ${ddgst:-false} 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 } 00:19:47.393 EOF 00:19:47.393 )") 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.393 { 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme$subsystem", 00:19:47.393 "trtype": "$TEST_TRANSPORT", 00:19:47.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "$NVMF_PORT", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.393 "hdgst": ${hdgst:-false}, 00:19:47.393 "ddgst": ${ddgst:-false} 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 } 00:19:47.393 EOF 00:19:47.393 )") 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:47.393 { 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme$subsystem", 00:19:47.393 "trtype": "$TEST_TRANSPORT", 00:19:47.393 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "$NVMF_PORT", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:47.393 "hdgst": ${hdgst:-false}, 00:19:47.393 "ddgst": ${ddgst:-false} 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 } 00:19:47.393 EOF 00:19:47.393 )") 00:19:47.393 14:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:47.393 14:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme1", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme2", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme3", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme4", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme5", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme6", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme7", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme8", 00:19:47.393 "trtype": "tcp", 
00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme9", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 },{ 00:19:47.393 "params": { 00:19:47.393 "name": "Nvme10", 00:19:47.393 "trtype": "tcp", 00:19:47.393 "traddr": "10.0.0.2", 00:19:47.393 "adrfam": "ipv4", 00:19:47.393 "trsvcid": "4420", 00:19:47.393 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:47.393 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:47.393 "hdgst": false, 00:19:47.393 "ddgst": false 00:19:47.393 }, 00:19:47.393 "method": "bdev_nvme_attach_controller" 00:19:47.393 }' 00:19:47.393 [2024-07-25 14:21:16.926871] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:19:47.393 [2024-07-25 14:21:16.926943] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:47.393 EAL: No free 2048 kB hugepages reported on node 1 00:19:47.393 [2024-07-25 14:21:16.991613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.653 [2024-07-25 14:21:17.103189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 957913 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:49.557 14:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:50.492 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 957913 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 957853 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.492 { 00:19:50.492 "params": { 00:19:50.492 "name": "Nvme$subsystem", 00:19:50.492 "trtype": "$TEST_TRANSPORT", 00:19:50.492 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.492 "adrfam": "ipv4", 00:19:50.492 "trsvcid": "$NVMF_PORT", 00:19:50.492 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.492 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.492 "hdgst": ${hdgst:-false}, 00:19:50.492 "ddgst": ${ddgst:-false} 00:19:50.492 }, 00:19:50.492 "method": "bdev_nvme_attach_controller" 00:19:50.492 } 00:19:50.492 EOF 00:19:50.492 )") 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.492 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.492 { 00:19:50.492 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.493 14:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.493 { 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme$subsystem", 00:19:50.493 "trtype": "$TEST_TRANSPORT", 00:19:50.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "$NVMF_PORT", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.493 "hdgst": ${hdgst:-false}, 00:19:50.493 "ddgst": ${ddgst:-false} 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 } 00:19:50.493 EOF 00:19:50.493 )") 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:50.493 14:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme1", 00:19:50.493 "trtype": "tcp", 00:19:50.493 "traddr": "10.0.0.2", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "4420", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.493 "hdgst": false, 00:19:50.493 "ddgst": false 00:19:50.493 }, 00:19:50.493 "method": "bdev_nvme_attach_controller" 00:19:50.493 },{ 00:19:50.493 "params": { 00:19:50.493 "name": "Nvme2", 00:19:50.493 "trtype": "tcp", 00:19:50.493 "traddr": "10.0.0.2", 00:19:50.493 "adrfam": "ipv4", 00:19:50.493 "trsvcid": "4420", 00:19:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.493 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:50.493 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 },{ 00:19:50.494 "params": { 00:19:50.494 "name": "Nvme3", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:50.494 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 },{ 00:19:50.494 "params": { 00:19:50.494 "name": "Nvme4", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:50.494 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 },{ 00:19:50.494 "params": { 00:19:50.494 "name": "Nvme5", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:50.494 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 },{ 00:19:50.494 "params": { 00:19:50.494 "name": "Nvme6", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:50.494 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 },{ 00:19:50.494 "params": { 00:19:50.494 "name": "Nvme7", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:50.494 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 },{ 00:19:50.494 "params": { 00:19:50.494 "name": "Nvme8", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:50.494 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 },{ 00:19:50.494 "params": { 00:19:50.494 "name": "Nvme9", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:50.494 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 },{ 00:19:50.494 "params": { 00:19:50.494 "name": "Nvme10", 00:19:50.494 "trtype": "tcp", 00:19:50.494 "traddr": "10.0.0.2", 00:19:50.494 "adrfam": "ipv4", 00:19:50.494 "trsvcid": "4420", 00:19:50.494 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:50.494 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:50.494 "hdgst": false, 00:19:50.494 "ddgst": false 00:19:50.494 }, 00:19:50.494 "method": "bdev_nvme_attach_controller" 00:19:50.494 }' 00:19:50.494 [2024-07-25 14:21:19.992486] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:19:50.494 [2024-07-25 14:21:19.992560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958335 ] 00:19:50.494 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.494 [2024-07-25 14:21:20.059368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.753 [2024-07-25 14:21:20.171488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.131 Running I/O for 1 seconds... 00:19:53.066 00:19:53.066 Latency(us) 00:19:53.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.066 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme1n1 : 1.16 220.80 13.80 0.00 0.00 287154.44 20194.80 256318.58 00:19:53.066 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme2n1 : 1.10 233.41 14.59 0.00 0.00 265781.10 18058.81 262532.36 00:19:53.066 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme3n1 : 1.07 238.35 14.90 0.00 0.00 255811.70 22913.33 245444.46 00:19:53.066 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme4n1 : 1.16 274.83 17.18 0.00 0.00 218061.63 18447.17 237677.23 00:19:53.066 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme5n1 : 1.17 222.96 13.94 0.00 0.00 265257.43 8738.13 257872.02 00:19:53.066 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme6n1 : 1.15 221.70 13.86 0.00 0.00 263152.45 21165.70 257872.02 00:19:53.066 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme7n1 : 1.18 271.00 16.94 0.00 0.00 211577.29 6602.15 239230.67 00:19:53.066 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 
Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme8n1 : 1.18 270.27 16.89 0.00 0.00 208044.18 10194.49 256318.58 00:19:53.066 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme9n1 : 1.18 217.53 13.60 0.00 0.00 255431.11 24272.59 287387.50 00:19:53.066 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:53.066 Verification LBA range: start 0x0 length 0x400 00:19:53.066 Nvme10n1 : 1.17 218.41 13.65 0.00 0.00 249843.48 29709.65 262532.36 00:19:53.066 =================================================================================================================== 00:19:53.066 Total : 2389.28 149.33 0.00 0.00 245566.81 6602.15 287387.50 00:19:53.324 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:53.324 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:53.324 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:53.324 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:53.324 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:53.324 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:53.324 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:53.581 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:53.581 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:53.581 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:53.581 14:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:53.581 rmmod nvme_tcp 00:19:53.581 rmmod nvme_fabrics 00:19:53.581 rmmod nvme_keyring 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 957853 ']' 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 957853 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 957853 ']' 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 957853 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 957853 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 957853' 00:19:53.581 killing process with pid 957853 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 957853 00:19:53.581 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 957853 00:19:54.149 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.149 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.149 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.149 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.149 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.149 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.149 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.149 14:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.081 00:19:56.081 real 0m11.779s 00:19:56.081 user 0m34.063s 00:19:56.081 sys 0m3.133s 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:56.081 ************************************ 00:19:56.081 END TEST nvmf_shutdown_tc1 00:19:56.081 ************************************ 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:56.081 ************************************ 00:19:56.081 START TEST nvmf_shutdown_tc2 00:19:56.081 ************************************ 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:56.081 14:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.081 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- 
# mlx=() 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:56.082 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:56.082 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:56.082 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:56.082 14:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:56.082 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:56.082 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:56.343 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:56.343 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:56.343 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:56.343 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:56.343 14:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:56.343 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:56.343 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:56.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:56.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:19:56.343 00:19:56.343 --- 10.0.0.2 ping statistics --- 00:19:56.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.343 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:19:56.343 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:56.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:56.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:19:56.343 00:19:56.343 --- 10.0.0.1 ping statistics --- 00:19:56.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:56.343 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:19:56.343 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=959095 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 959095 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@829 -- # '[' -z 959095 ']' 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:56.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.344 14:21:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.344 [2024-07-25 14:21:25.896170] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:19:56.344 [2024-07-25 14:21:25.896241] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:56.344 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.344 [2024-07-25 14:21:25.962935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:56.603 [2024-07-25 14:21:26.076052] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:56.603 [2024-07-25 14:21:26.076151] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:56.603 [2024-07-25 14:21:26.076165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:56.603 [2024-07-25 14:21:26.076176] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:56.603 [2024-07-25 14:21:26.076185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
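The trace that follows covers target startup on reactors 1-4 and the create_subsystems step, which builds ten Malloc-backed subsystems (nqn.2016-06.io.spdk:cnode1 through cnode10) with a TCP listener on 10.0.0.2:4420. A rough, hedged sketch of the RPC sequence that step amounts to is below; the transport options, NQNs, and listener address come from the trace, while the rpc.py path, the Malloc bdev geometry, and the serial strings are assumptions.

    # Approximate reconstruction of the create_subsystems RPC sequence (hedged sketch);
    # bdev geometry (64 MiB, 512 B blocks) and the SPDK$i serials are assumptions.
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 10); do
        $rpc bdev_malloc_create -b Malloc$i 64 512
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done

In the actual run the per-subsystem RPCs are appended to rpcs.txt and submitted in one rpc_cmd call, as the shutdown.sh@26-@35 trace lines show; the sketch above just spells them out one by one.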
00:19:56.603 [2024-07-25 14:21:26.076259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.603 [2024-07-25 14:21:26.076329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:56.603 [2024-07-25 14:21:26.076658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:56.603 [2024-07-25 14:21:26.076662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.603 [2024-07-25 14:21:26.231483] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:19:56.603 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.863 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.863 Malloc1 00:19:56.863 [2024-07-25 14:21:26.320857] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.863 Malloc2 00:19:56.863 Malloc3 00:19:56.863 Malloc4 00:19:56.863 Malloc5 00:19:57.122 Malloc6 00:19:57.122 Malloc7 00:19:57.122 Malloc8 00:19:57.122 Malloc9 00:19:57.122 Malloc10 00:19:57.122 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.122 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:57.122 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:57.122 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=959273 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 959273 /var/tmp/bdevperf.sock 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 959273 ']' 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.381 14:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.381 { 00:19:57.381 "params": { 00:19:57.381 "name": "Nvme$subsystem", 00:19:57.381 "trtype": "$TEST_TRANSPORT", 00:19:57.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.381 "adrfam": "ipv4", 00:19:57.381 "trsvcid": "$NVMF_PORT", 00:19:57.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.381 "hdgst": ${hdgst:-false}, 00:19:57.381 "ddgst": ${ddgst:-false} 00:19:57.381 }, 00:19:57.381 "method": "bdev_nvme_attach_controller" 00:19:57.381 } 00:19:57.381 EOF 00:19:57.381 )") 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.381 { 00:19:57.381 "params": { 00:19:57.381 "name": "Nvme$subsystem", 00:19:57.381 "trtype": "$TEST_TRANSPORT", 00:19:57.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.381 "adrfam": "ipv4", 00:19:57.381 "trsvcid": "$NVMF_PORT", 00:19:57.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.381 "hdgst": ${hdgst:-false}, 00:19:57.381 "ddgst": ${ddgst:-false} 00:19:57.381 }, 00:19:57.381 "method": "bdev_nvme_attach_controller" 00:19:57.381 } 00:19:57.381 EOF 00:19:57.381 )") 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.381 { 00:19:57.381 "params": { 00:19:57.381 
"name": "Nvme$subsystem", 00:19:57.381 "trtype": "$TEST_TRANSPORT", 00:19:57.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.381 "adrfam": "ipv4", 00:19:57.381 "trsvcid": "$NVMF_PORT", 00:19:57.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.381 "hdgst": ${hdgst:-false}, 00:19:57.381 "ddgst": ${ddgst:-false} 00:19:57.381 }, 00:19:57.381 "method": "bdev_nvme_attach_controller" 00:19:57.381 } 00:19:57.381 EOF 00:19:57.381 )") 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.381 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.381 { 00:19:57.381 "params": { 00:19:57.381 "name": "Nvme$subsystem", 00:19:57.381 "trtype": "$TEST_TRANSPORT", 00:19:57.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.381 "adrfam": "ipv4", 00:19:57.381 "trsvcid": "$NVMF_PORT", 00:19:57.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.381 "hdgst": ${hdgst:-false}, 00:19:57.382 "ddgst": ${ddgst:-false} 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 } 00:19:57.382 EOF 00:19:57.382 )") 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.382 { 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme$subsystem", 00:19:57.382 "trtype": "$TEST_TRANSPORT", 00:19:57.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "$NVMF_PORT", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.382 "hdgst": ${hdgst:-false}, 00:19:57.382 "ddgst": ${ddgst:-false} 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 } 00:19:57.382 EOF 00:19:57.382 )") 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.382 { 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme$subsystem", 00:19:57.382 "trtype": "$TEST_TRANSPORT", 00:19:57.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "$NVMF_PORT", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.382 "hdgst": ${hdgst:-false}, 00:19:57.382 "ddgst": ${ddgst:-false} 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 } 00:19:57.382 EOF 00:19:57.382 )") 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.382 { 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme$subsystem", 00:19:57.382 "trtype": "$TEST_TRANSPORT", 00:19:57.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "$NVMF_PORT", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.382 "hdgst": ${hdgst:-false}, 00:19:57.382 "ddgst": ${ddgst:-false} 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 } 00:19:57.382 EOF 00:19:57.382 )") 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.382 { 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme$subsystem", 00:19:57.382 "trtype": "$TEST_TRANSPORT", 00:19:57.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "$NVMF_PORT", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.382 "hdgst": ${hdgst:-false}, 00:19:57.382 "ddgst": ${ddgst:-false} 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 } 00:19:57.382 EOF 00:19:57.382 )") 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.382 { 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme$subsystem", 00:19:57.382 "trtype": "$TEST_TRANSPORT", 00:19:57.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "$NVMF_PORT", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.382 "hdgst": ${hdgst:-false}, 00:19:57.382 "ddgst": ${ddgst:-false} 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 } 00:19:57.382 EOF 00:19:57.382 )") 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:57.382 { 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme$subsystem", 00:19:57.382 "trtype": "$TEST_TRANSPORT", 00:19:57.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "$NVMF_PORT", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:57.382 "hdgst": ${hdgst:-false}, 00:19:57.382 "ddgst": ${ddgst:-false} 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 } 00:19:57.382 EOF 00:19:57.382 )") 00:19:57.382 14:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:57.382 14:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme1", 00:19:57.382 "trtype": "tcp", 00:19:57.382 "traddr": "10.0.0.2", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "4420", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:57.382 "hdgst": false, 00:19:57.382 "ddgst": false 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 },{ 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme2", 00:19:57.382 "trtype": "tcp", 00:19:57.382 "traddr": "10.0.0.2", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "4420", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:57.382 "hdgst": false, 00:19:57.382 "ddgst": false 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 },{ 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme3", 00:19:57.382 "trtype": "tcp", 00:19:57.382 "traddr": "10.0.0.2", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "4420", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:57.382 "hdgst": false, 00:19:57.382 "ddgst": false 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 },{ 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme4", 00:19:57.382 "trtype": "tcp", 00:19:57.382 "traddr": "10.0.0.2", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "4420", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:57.382 "hdgst": false, 00:19:57.382 "ddgst": false 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 },{ 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme5", 00:19:57.382 "trtype": "tcp", 00:19:57.382 "traddr": "10.0.0.2", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "4420", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:57.382 "hdgst": false, 00:19:57.382 "ddgst": false 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 },{ 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme6", 00:19:57.382 "trtype": "tcp", 00:19:57.382 "traddr": "10.0.0.2", 00:19:57.382 "adrfam": "ipv4", 00:19:57.382 "trsvcid": "4420", 00:19:57.382 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:57.382 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:57.382 "hdgst": false, 00:19:57.382 "ddgst": false 00:19:57.382 }, 00:19:57.382 "method": "bdev_nvme_attach_controller" 00:19:57.382 },{ 00:19:57.382 "params": { 00:19:57.382 "name": "Nvme7", 00:19:57.382 "trtype": "tcp", 00:19:57.383 "traddr": "10.0.0.2", 00:19:57.383 "adrfam": "ipv4", 00:19:57.383 "trsvcid": "4420", 00:19:57.383 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:57.383 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:57.383 "hdgst": false, 00:19:57.383 "ddgst": false 00:19:57.383 }, 00:19:57.383 "method": "bdev_nvme_attach_controller" 00:19:57.383 },{ 00:19:57.383 "params": { 00:19:57.383 "name": "Nvme8", 00:19:57.383 "trtype": "tcp", 
00:19:57.383 "traddr": "10.0.0.2", 00:19:57.383 "adrfam": "ipv4", 00:19:57.383 "trsvcid": "4420", 00:19:57.383 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:57.383 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:57.383 "hdgst": false, 00:19:57.383 "ddgst": false 00:19:57.383 }, 00:19:57.383 "method": "bdev_nvme_attach_controller" 00:19:57.383 },{ 00:19:57.383 "params": { 00:19:57.383 "name": "Nvme9", 00:19:57.383 "trtype": "tcp", 00:19:57.383 "traddr": "10.0.0.2", 00:19:57.383 "adrfam": "ipv4", 00:19:57.383 "trsvcid": "4420", 00:19:57.383 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:57.383 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:57.383 "hdgst": false, 00:19:57.383 "ddgst": false 00:19:57.383 }, 00:19:57.383 "method": "bdev_nvme_attach_controller" 00:19:57.383 },{ 00:19:57.383 "params": { 00:19:57.383 "name": "Nvme10", 00:19:57.383 "trtype": "tcp", 00:19:57.383 "traddr": "10.0.0.2", 00:19:57.383 "adrfam": "ipv4", 00:19:57.383 "trsvcid": "4420", 00:19:57.383 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:57.383 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:57.383 "hdgst": false, 00:19:57.383 "ddgst": false 00:19:57.383 }, 00:19:57.383 "method": "bdev_nvme_attach_controller" 00:19:57.383 }' 00:19:57.383 [2024-07-25 14:21:26.827830] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:19:57.383 [2024-07-25 14:21:26.827904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959273 ] 00:19:57.383 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.383 [2024-07-25 14:21:26.891261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.383 [2024-07-25 14:21:27.001755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.287 Running I/O for 10 seconds... 
00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:59.287 14:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.545 14:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 959273 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 959273 ']' 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 959273 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 959273 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 959273' 00:19:59.545 killing process with pid 959273 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 959273 00:19:59.545 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 959273 00:19:59.804 Received shutdown signal, test time was about 0.762474 seconds 00:19:59.804 00:19:59.804 Latency(us) 00:19:59.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.804 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.804 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme1n1 : 0.74 264.64 16.54 0.00 0.00 237403.76 3203.98 226803.11 00:19:59.805 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme2n1 : 0.75 254.82 15.93 0.00 0.00 241515.58 20971.52 248551.35 00:19:59.805 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme3n1 : 0.74 267.62 16.73 0.00 0.00 222793.22 4466.16 253211.69 00:19:59.805 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme4n1 : 0.73 268.51 16.78 0.00 0.00 215863.62 5412.79 246997.90 00:19:59.805 Job: Nvme5n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme5n1 : 0.76 252.38 15.77 0.00 0.00 225263.31 35923.44 234570.33 00:19:59.805 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme6n1 : 0.71 180.50 11.28 0.00 0.00 304101.83 22719.15 250104.79 00:19:59.805 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme7n1 : 0.76 252.09 15.76 0.00 0.00 213913.85 21651.15 253211.69 00:19:59.805 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme8n1 : 0.75 255.96 16.00 0.00 0.00 204410.88 21262.79 246997.90 00:19:59.805 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme9n1 : 0.72 177.44 11.09 0.00 0.00 283629.04 23204.60 274959.93 00:19:59.805 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:59.805 Verification LBA range: start 0x0 length 0x400 00:19:59.805 Nvme10n1 : 0.73 176.16 11.01 0.00 0.00 277316.27 22136.60 282727.16 00:19:59.805 =================================================================================================================== 00:19:59.805 Total : 2350.12 146.88 0.00 0.00 237426.99 3203.98 282727.16 00:20:00.064 14:21:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 959095 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.999 rmmod nvme_tcp 00:20:00.999 rmmod nvme_fabrics 00:20:00.999 rmmod nvme_keyring 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.999 14:21:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 959095 ']' 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 959095 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 959095 ']' 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 959095 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 959095 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 959095' 00:20:00.999 killing process with pid 959095 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 959095 00:20:00.999 14:21:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 959095 00:20:01.567 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:01.567 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:01.567 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:01.567 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.567 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:01.567 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.567 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.567 14:21:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:04.103 00:20:04.103 real 0m7.548s 00:20:04.103 user 0m22.405s 00:20:04.103 sys 0m1.391s 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:04.103 ************************************ 00:20:04.103 END TEST 
nvmf_shutdown_tc2 00:20:04.103 ************************************ 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:04.103 ************************************ 00:20:04.103 START TEST nvmf_shutdown_tc3 00:20:04.103 ************************************ 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # 
pci_drivers=() 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.103 
14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:04.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:04.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:04.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:04.103 14:21:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.103 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:04.104 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip 
-4 addr flush cvl_0_1 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:04.104 00:20:04.104 --- 10.0.0.2 ping statistics --- 00:20:04.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.104 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.104 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.104 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:20:04.104 00:20:04.104 --- 10.0.0.1 ping statistics --- 00:20:04.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.104 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=960179 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 960179 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 960179 ']' 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
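Note on the topology being built here: nvmftestinit detects the two E810 ports (cvl_0_0, cvl_0_1), moves one into a private network namespace to act as the target side, keeps the other in the root namespace as the initiator, and only then launches nvmf_tgt inside that namespace. Condensed to the commands actually visible in the xtrace above (device names, addresses and paths are the ones from this job's trace), the setup is roughly:

    # target side: one port lives in its own namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # initiator side: the second port stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # sanity pings in both directions, then start the target inside the namespace
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # core mask 0x1E gives four reactors (cores 1-4, as reported below);
    # nvmfappstart backgrounds this and waits on /var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &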
00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.104 14:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.104 [2024-07-25 14:21:33.511135] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:20:04.104 [2024-07-25 14:21:33.511225] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.104 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.104 [2024-07-25 14:21:33.574577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.104 [2024-07-25 14:21:33.682275] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.104 [2024-07-25 14:21:33.682322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.104 [2024-07-25 14:21:33.682356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.104 [2024-07-25 14:21:33.682368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.104 [2024-07-25 14:21:33.682377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.104 [2024-07-25 14:21:33.682502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.104 [2024-07-25 14:21:33.682566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.104 [2024-07-25 14:21:33.682819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:04.104 [2024-07-25 14:21:33.682822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.043 [2024-07-25 14:21:34.513637] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.043 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.044 14:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:20:05.044 Malloc1 00:20:05.044 [2024-07-25 14:21:34.603245] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.044 Malloc2 00:20:05.044 Malloc3 00:20:05.302 Malloc4 00:20:05.302 Malloc5 00:20:05.302 Malloc6 00:20:05.302 Malloc7 00:20:05.302 Malloc8 00:20:05.564 Malloc9 00:20:05.564 Malloc10 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=960362 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 960362 /var/tmp/bdevperf.sock 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 960362 ']' 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
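At this point tc3 starts the same workload profile tc2 just finished: bdevperf is handed a generated JSON config on /dev/fd/63 and runs a 64 KiB verify workload at queue depth 64 for 10 seconds against all ten subsystems, while the harness polls bdev_get_iostat over the private /var/tmp/bdevperf.sock RPC socket (the waitforio loop traced further down) until Nvme1n1 reports at least 100 read ops. A stand-alone equivalent of the invocation traced above, assuming the harness's nvmf/common.sh is sourced so gen_nvmf_target_json is available, would look like:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # -q 64 -o 65536 -w verify -t 10: queue depth 64, 64 KiB I/Os, verify pattern, 10 s run
    # -r /var/tmp/bdevperf.sock: private RPC socket, later polled with bdev_get_iostat
    "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json {1..10}) \
        -q 64 -o 65536 -w verify -t 10

The process substitution stands in for the /dev/fd/63 descriptor seen in the trace; the script passes the generated config the same way.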
00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.564 { 00:20:05.564 "params": { 00:20:05.564 "name": "Nvme$subsystem", 00:20:05.564 "trtype": "$TEST_TRANSPORT", 00:20:05.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.564 "adrfam": "ipv4", 00:20:05.564 "trsvcid": "$NVMF_PORT", 00:20:05.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.564 "hdgst": ${hdgst:-false}, 00:20:05.564 "ddgst": ${ddgst:-false} 00:20:05.564 }, 00:20:05.564 "method": "bdev_nvme_attach_controller" 00:20:05.564 } 00:20:05.564 EOF 00:20:05.564 )") 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.564 { 00:20:05.564 "params": { 00:20:05.564 "name": "Nvme$subsystem", 00:20:05.564 "trtype": "$TEST_TRANSPORT", 00:20:05.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.564 "adrfam": "ipv4", 00:20:05.564 "trsvcid": "$NVMF_PORT", 00:20:05.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.564 "hdgst": ${hdgst:-false}, 00:20:05.564 "ddgst": ${ddgst:-false} 00:20:05.564 }, 00:20:05.564 "method": "bdev_nvme_attach_controller" 00:20:05.564 } 00:20:05.564 EOF 00:20:05.564 )") 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.564 { 00:20:05.564 "params": { 00:20:05.564 "name": "Nvme$subsystem", 00:20:05.564 "trtype": "$TEST_TRANSPORT", 00:20:05.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.564 "adrfam": "ipv4", 00:20:05.564 "trsvcid": "$NVMF_PORT", 00:20:05.564 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.564 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.564 "hdgst": ${hdgst:-false}, 00:20:05.564 "ddgst": ${ddgst:-false} 00:20:05.564 }, 00:20:05.564 "method": "bdev_nvme_attach_controller" 00:20:05.564 } 00:20:05.564 EOF 00:20:05.564 )") 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.564 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.564 { 00:20:05.564 "params": { 00:20:05.564 "name": "Nvme$subsystem", 00:20:05.564 
"trtype": "$TEST_TRANSPORT", 00:20:05.564 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.564 "adrfam": "ipv4", 00:20:05.564 "trsvcid": "$NVMF_PORT", 00:20:05.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.565 "hdgst": ${hdgst:-false}, 00:20:05.565 "ddgst": ${ddgst:-false} 00:20:05.565 }, 00:20:05.565 "method": "bdev_nvme_attach_controller" 00:20:05.565 } 00:20:05.565 EOF 00:20:05.565 )") 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.565 { 00:20:05.565 "params": { 00:20:05.565 "name": "Nvme$subsystem", 00:20:05.565 "trtype": "$TEST_TRANSPORT", 00:20:05.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.565 "adrfam": "ipv4", 00:20:05.565 "trsvcid": "$NVMF_PORT", 00:20:05.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.565 "hdgst": ${hdgst:-false}, 00:20:05.565 "ddgst": ${ddgst:-false} 00:20:05.565 }, 00:20:05.565 "method": "bdev_nvme_attach_controller" 00:20:05.565 } 00:20:05.565 EOF 00:20:05.565 )") 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.565 { 00:20:05.565 "params": { 00:20:05.565 "name": "Nvme$subsystem", 00:20:05.565 "trtype": "$TEST_TRANSPORT", 00:20:05.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.565 "adrfam": "ipv4", 00:20:05.565 "trsvcid": "$NVMF_PORT", 00:20:05.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.565 "hdgst": ${hdgst:-false}, 00:20:05.565 "ddgst": ${ddgst:-false} 00:20:05.565 }, 00:20:05.565 "method": "bdev_nvme_attach_controller" 00:20:05.565 } 00:20:05.565 EOF 00:20:05.565 )") 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.565 { 00:20:05.565 "params": { 00:20:05.565 "name": "Nvme$subsystem", 00:20:05.565 "trtype": "$TEST_TRANSPORT", 00:20:05.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.565 "adrfam": "ipv4", 00:20:05.565 "trsvcid": "$NVMF_PORT", 00:20:05.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.565 "hdgst": ${hdgst:-false}, 00:20:05.565 "ddgst": ${ddgst:-false} 00:20:05.565 }, 00:20:05.565 "method": "bdev_nvme_attach_controller" 00:20:05.565 } 00:20:05.565 EOF 00:20:05.565 )") 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.565 14:21:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.565 { 00:20:05.565 "params": { 00:20:05.565 "name": "Nvme$subsystem", 00:20:05.565 "trtype": "$TEST_TRANSPORT", 00:20:05.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.565 "adrfam": "ipv4", 00:20:05.565 "trsvcid": "$NVMF_PORT", 00:20:05.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.565 "hdgst": ${hdgst:-false}, 00:20:05.565 "ddgst": ${ddgst:-false} 00:20:05.565 }, 00:20:05.565 "method": "bdev_nvme_attach_controller" 00:20:05.565 } 00:20:05.565 EOF 00:20:05.565 )") 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.565 { 00:20:05.565 "params": { 00:20:05.565 "name": "Nvme$subsystem", 00:20:05.565 "trtype": "$TEST_TRANSPORT", 00:20:05.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.565 "adrfam": "ipv4", 00:20:05.565 "trsvcid": "$NVMF_PORT", 00:20:05.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.565 "hdgst": ${hdgst:-false}, 00:20:05.565 "ddgst": ${ddgst:-false} 00:20:05.565 }, 00:20:05.565 "method": "bdev_nvme_attach_controller" 00:20:05.565 } 00:20:05.565 EOF 00:20:05.565 )") 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:05.565 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:05.565 { 00:20:05.565 "params": { 00:20:05.565 "name": "Nvme$subsystem", 00:20:05.565 "trtype": "$TEST_TRANSPORT", 00:20:05.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:05.565 "adrfam": "ipv4", 00:20:05.565 "trsvcid": "$NVMF_PORT", 00:20:05.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:05.566 "hdgst": ${hdgst:-false}, 00:20:05.566 "ddgst": ${ddgst:-false} 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 } 00:20:05.566 EOF 00:20:05.566 )") 00:20:05.566 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:05.566 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
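The loop traced above is the whole of the config generator: for each requested subsystem index it emits one bdev_nvme_attach_controller stanza from the harness environment ($TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP, $NVMF_PORT, optional hdgst/ddgst); the stanzas are then comma-joined (the IFS=, and printf on the next lines) and pushed through jq, which doubles as a syntax check, producing the ten attach calls printed next. As a rough reconstruction of that pattern, only the per-stanza heredoc and the IFS/printf/jq join are visible in the trace; the enclosing "subsystems"/"bdev" envelope below is assumed from the usual SPDK JSON-config layout, not shown in the log:

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # one attach-controller stanza per requested subsystem index
            config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
            )")
        done
        # comma-join the stanzas and let jq validate and pretty-print the final document
        local IFS=,
        jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ $(printf '%s' "${config[*]}") ] } ] }
EOF
    }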
00:20:05.566 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:05.566 14:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme1", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme2", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme3", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme4", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme5", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme6", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme7", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme8", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme9", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.566 "adrfam": "ipv4", 00:20:05.566 "trsvcid": "4420", 00:20:05.566 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:05.566 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:05.566 "hdgst": false, 00:20:05.566 "ddgst": false 00:20:05.566 }, 00:20:05.566 "method": "bdev_nvme_attach_controller" 00:20:05.566 },{ 00:20:05.566 "params": { 00:20:05.566 "name": "Nvme10", 00:20:05.566 "trtype": "tcp", 00:20:05.566 "traddr": "10.0.0.2", 00:20:05.567 "adrfam": "ipv4", 00:20:05.567 "trsvcid": "4420", 00:20:05.567 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:05.567 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:05.567 "hdgst": false, 00:20:05.567 "ddgst": false 00:20:05.567 }, 00:20:05.567 "method": "bdev_nvme_attach_controller" 00:20:05.567 }' 00:20:05.567 [2024-07-25 14:21:35.124214] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:20:05.567 [2024-07-25 14:21:35.124291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid960362 ] 00:20:05.567 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.567 [2024-07-25 14:21:35.187014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.827 [2024-07-25 14:21:35.299692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.200 Running I/O for 10 seconds... 00:20:07.459 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:07.459 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:07.459 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:07.459 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.459 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:07.717 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 960179 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 960179 ']' 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 960179 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:07.989 14:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 960179 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 960179' 00:20:07.989 killing process with pid 960179 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 960179 00:20:07.989 14:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 960179 00:20:07.989 [2024-07-25 14:21:37.481253] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481406] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481464] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481493] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481543] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481722] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.989 [2024-07-25 14:21:37.481813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481825] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the 
state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481982] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.481997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482046] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482081] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.482146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4f920 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.484988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 
14:21:37.485001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485013] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485069] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485157] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485169] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485211] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same 
with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485348] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485360] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485393] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.990 [2024-07-25 14:21:37.485405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.485417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b4fde0 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487754] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487816] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487878] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487934] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487971] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.487996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488035] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488047] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the 
state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488421] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.488495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b50780 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.490224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.490249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.490263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.490276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.991 [2024-07-25 14:21:37.490288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490338] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 
14:21:37.490398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490436] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490462] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490513] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490539] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490551] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490564] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490576] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490589] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490615] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same 
with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490690] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490719] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490768] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490917] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490942] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.490990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.491002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.491015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.491026] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51100 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492459] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492519] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the 
state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492580] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492605] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.992 [2024-07-25 14:21:37.492629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492641] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492678] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492727] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492755] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492843] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.492993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 
14:21:37.493131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493143] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.493192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b515c0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494261] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494289] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494314] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same 
with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494476] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494532] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494561] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.993 [2024-07-25 14:21:37.494647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494684] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494711] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494736] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494834] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494846] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494863] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.494986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.495007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the 
state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.495019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.495031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.495043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.495055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b51aa0 is same with the state(5) to be set 00:20:07.994 [2024-07-25 14:21:37.501160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501491] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.994 [2024-07-25 14:21:37.501655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.994 [2024-07-25 14:21:37.501671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.501980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.501995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.995 [2024-07-25 14:21:37.502794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.995 [2024-07-25 14:21:37.502810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.502823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.502839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.502852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.502868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.502882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.502898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.502913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.502929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.502944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.502963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.502978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.502993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:07.996 [2024-07-25 14:21:37.503396] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d1bd40 was disconnected and freed. reset controller. 
00:20:07.996 [2024-07-25 14:21:37.503906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.503970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.503987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 
[2024-07-25 14:21:37.504258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 
14:21:37.504560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.996 [2024-07-25 14:21:37.504574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.996 [2024-07-25 14:21:37.504591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 
14:21:37.504860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.504979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.504993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.997 [2024-07-25 14:21:37.505710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.997 [2024-07-25 14:21:37.505724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.505740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.505754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.505770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.505785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.505800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.505814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.505830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.505844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.505860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.505874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.505890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.505904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.505942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:07.998 [2024-07-25 14:21:37.506017] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c4ae00 was disconnected and freed. reset controller. 00:20:07.998 [2024-07-25 14:21:37.506507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.506983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.506998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 
lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.998 [2024-07-25 14:21:37.507422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.998 [2024-07-25 14:21:37.507436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.507974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.507990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:07.999 [2024-07-25 14:21:37.508356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:07.999 [2024-07-25 14:21:37.508372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.000 [2024-07-25 14:21:37.508386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.508402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.000 [2024-07-25 14:21:37.508416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.508443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.000 [2024-07-25 14:21:37.508458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.508474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.000 [2024-07-25 14:21:37.508491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.508508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.000 [2024-07-25 14:21:37.508523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.508558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:08.000 [2024-07-25 14:21:37.509175] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c3ffe0 was disconnected and freed. reset controller. 
00:20:08.000 [2024-07-25 14:21:37.509321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2610 is same with the state(5) to be set 00:20:08.000 [2024-07-25 14:21:37.509496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509594] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd0280 is same with the state(5) to be set 00:20:08.000 [2024-07-25 14:21:37.509666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d58cd0 is same with the state(5) to be set 00:20:08.000 [2024-07-25 14:21:37.509853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.509963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.509977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67570 is same with the state(5) to be set 00:20:08.000 [2024-07-25 14:21:37.510034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:08.000 [2024-07-25 14:21:37.510127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcf910 is same with the state(5) to be set 00:20:08.000 [2024-07-25 14:21:37.510217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510343] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4b50 is same with the state(5) to be set 00:20:08.000 [2024-07-25 14:21:37.510389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.000 [2024-07-25 14:21:37.510481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.000 [2024-07-25 14:21:37.510494] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc3400 is same with the state(5) to be set 00:20:08.001 [2024-07-25 14:21:37.510551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcac80 is same with the state(5) to be set 00:20:08.001 [2024-07-25 14:21:37.510725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0830 is same with the state(5) to be set 00:20:08.001 [2024-07-25 14:21:37.510887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 
[2024-07-25 14:21:37.510907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.510978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:08.001 [2024-07-25 14:21:37.510991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.511004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c757a0 is same with the state(5) to be set 00:20:08.001 [2024-07-25 14:21:37.512702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.512728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.512755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.512770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.512787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.512812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.512828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.512848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.512871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.512887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.512905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.512920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.512936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.512950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.512967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.512981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.512998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.001 [2024-07-25 14:21:37.513427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.001 [2024-07-25 14:21:37.513443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.513970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.513986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.002 [2024-07-25 14:21:37.514560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.002 [2024-07-25 14:21:37.514576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.514591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.514608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.514622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.514639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.514653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.514670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.514684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.514700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.514714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.514731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.514746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.514762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d28160 is same with the state(5) to be set 00:20:08.003 [2024-07-25 14:21:37.514847] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d28160 was disconnected and freed. reset controller. 
00:20:08.003 [2024-07-25 14:21:37.518700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:08.003 [2024-07-25 14:21:37.518759] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c757a0 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:08.003 [2024-07-25 14:21:37.520473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:08.003 [2024-07-25 14:21:37.520496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:08.003 [2024-07-25 14:21:37.520526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcac80 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d58cd0 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd0280 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16a2610 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520653] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d67570 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcf910 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc4b50 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc3400 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.520778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba0830 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.522153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.003 [2024-07-25 14:21:37.522188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c757a0 with addr=10.0.0.2, port=4420 00:20:08.003 [2024-07-25 14:21:37.522207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c757a0 is same with the state(5) to be set 00:20:08.003 [2024-07-25 14:21:37.522675] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:08.003 [2024-07-25 14:21:37.522766] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:08.003 [2024-07-25 14:21:37.522839] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:08.003 [2024-07-25 14:21:37.522909] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:08.003 [2024-07-25 14:21:37.523029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.003 [2024-07-25 14:21:37.523056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bd0280 with addr=10.0.0.2, port=4420 00:20:08.003 [2024-07-25 14:21:37.523084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd0280 is same with the state(5) to be set 
00:20:08.003 [2024-07-25 14:21:37.523178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.003 [2024-07-25 14:21:37.523203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d58cd0 with addr=10.0.0.2, port=4420 00:20:08.003 [2024-07-25 14:21:37.523219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d58cd0 is same with the state(5) to be set 00:20:08.003 [2024-07-25 14:21:37.523308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.003 [2024-07-25 14:21:37.523334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcac80 with addr=10.0.0.2, port=4420 00:20:08.003 [2024-07-25 14:21:37.523351] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcac80 is same with the state(5) to be set 00:20:08.003 [2024-07-25 14:21:37.523371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c757a0 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.523454] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:08.003 [2024-07-25 14:21:37.523519] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:08.003 [2024-07-25 14:21:37.523649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd0280 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.523676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d58cd0 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.523697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcac80 (9): Bad file descriptor 00:20:08.003 [2024-07-25 14:21:37.523716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:08.003 [2024-07-25 14:21:37.523730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:08.003 [2024-07-25 14:21:37.523747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:20:08.003 [2024-07-25 14:21:37.523815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.523839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.523869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.523886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.523903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.523918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.523933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.523948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.523964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.523978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.523994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.524009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.524027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.524041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.524066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.524083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.524099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.524114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.524131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.524145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 
14:21:37.524166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.524182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.524198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.524213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.003 [2024-07-25 14:21:37.524229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.003 [2024-07-25 14:21:37.524244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524475] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524780] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.524981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.524997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.525010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.525027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.525041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.525056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.525079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.525095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.525110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.525125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.525140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.525156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.004 [2024-07-25 14:21:37.525170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.004 [2024-07-25 14:21:37.525185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.525817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.525832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d26cb0 is same with the state(5) to be set 00:20:08.005 [2024-07-25 14:21:37.525932] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d26cb0 was disconnected and freed. reset controller. 00:20:08.005 [2024-07-25 14:21:37.525993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.005 [2024-07-25 14:21:37.526019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:08.005 [2024-07-25 14:21:37.526034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:08.005 [2024-07-25 14:21:37.526048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:08.005 [2024-07-25 14:21:37.526073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:08.005 [2024-07-25 14:21:37.526089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:08.005 [2024-07-25 14:21:37.526103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:08.005 [2024-07-25 14:21:37.526121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:08.005 [2024-07-25 14:21:37.526135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:08.005 [2024-07-25 14:21:37.526148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:08.005 [2024-07-25 14:21:37.527418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.005 [2024-07-25 14:21:37.527441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
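[editor's note] The reset path above leaves cnode9, cnode10 and cnode3 in a failed state, and the reconnect attempt against 10.0.0.2 port 4420 that follows immediately below fails in posix_sock_create with errno = 111. On Linux errno 111 is ECONNREFUSED, i.e. nothing was listening on the NVMe-oF TCP port at that instant. A minimal standalone sketch of the same probe (plain sockets, not SPDK code; the 10.0.0.x address is specific to this CI test bed and is only meaningful inside it):

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* 4420 is the IANA-assigned NVMe-oF port; the target address comes from the log above. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the port this prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

With the target's listener torn down mid-reset, this is exactly the "connect() failed, errno = 111" the posix sock layer reports before the controller is marked failed again. [end editor's note]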
00:20:08.005 [2024-07-25 14:21:37.527454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.005 [2024-07-25 14:21:37.527468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:08.005 [2024-07-25 14:21:37.527650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.005 [2024-07-25 14:21:37.527677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc4b50 with addr=10.0.0.2, port=4420 00:20:08.005 [2024-07-25 14:21:37.527696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4b50 is same with the state(5) to be set 00:20:08.005 [2024-07-25 14:21:37.528022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc4b50 (9): Bad file descriptor 00:20:08.005 [2024-07-25 14:21:37.528095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:08.005 [2024-07-25 14:21:37.528121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:08.005 [2024-07-25 14:21:37.528136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:08.005 [2024-07-25 14:21:37.528194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.005 [2024-07-25 14:21:37.530596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.530622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.530648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.530664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.530681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.530696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.530712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.530726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.530742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.005 [2024-07-25 14:21:37.530756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.005 [2024-07-25 14:21:37.530772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.530787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.530803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.530817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.530833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.530847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.530863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.530877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.530893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.530909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.530924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.530939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.530955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.530975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.530992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:08.006 [2024-07-25 14:21:37.531124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 
14:21:37.531434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531737] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.006 [2024-07-25 14:21:37.531895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.006 [2024-07-25 14:21:37.531909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.531925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.531939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.531955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.531970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.531986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.532592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.532607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d25920 is same with the state(5) to be set 00:20:08.007 [2024-07-25 14:21:37.533976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534037] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534359] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.007 [2024-07-25 14:21:37.534389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.007 [2024-07-25 14:21:37.534405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.534983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.534997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.008 [2024-07-25 14:21:37.535431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.008 [2024-07-25 14:21:37.535450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
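[editor's note] Every completion in this dump carries the status pair "(00/08)": status code type 0x0 (generic command status) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion, so the queued reads and writes on qpair 1 are being aborted because the submission queue is deleted during the controller reset. A minimal decoder for just this pair, written against the spec values; nvme_status_str is an illustrative name, not an SPDK helper:

#include <stdio.h>

/* Decode the "(sct/sc)" pair printed in the log for the one case seen here. */
static const char *nvme_status_str(unsigned sct, unsigned sc)
{
    if (sct == 0x0 && sc == 0x08)  /* generic command status / command aborted due to SQ deletion */
        return "ABORTED - SQ DELETION";
    return "unknown (see the NVMe base spec status code tables)";
}

int main(void)
{
    printf("(00/08) -> %s\n", nvme_status_str(0x0, 0x08));
    return 0;
}
[end editor's note]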
00:20:08.009 [2024-07-25 14:21:37.535601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 
14:21:37.535905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.535949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.535964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c47ba0 is same with the state(5) to be set 00:20:08.009 [2024-07-25 14:21:37.537288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.009 [2024-07-25 14:21:37.537778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.009 [2024-07-25 14:21:37.537794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.537808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.537823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.537838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.537854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.537868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.537885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.537899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.537914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.537932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.537948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.537963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.537978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.537993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538471] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.010 [2024-07-25 14:21:37.538896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.010 [2024-07-25 14:21:37.538912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.538926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.538942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.538957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.538972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.538987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.539270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.539285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c48ae0 is same with the state(5) to be set 00:20:08.011 [2024-07-25 14:21:37.540612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540722] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.540970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.540984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.011 [2024-07-25 14:21:37.541358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.011 [2024-07-25 14:21:37.541373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:08.012 [2024-07-25 14:21:37.541967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.541984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.541998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 
14:21:37.542278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.012 [2024-07-25 14:21:37.542462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.012 [2024-07-25 14:21:37.542477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.542493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.542508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.542523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.542539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.542553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.542568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.542582] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.542597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b9c050 is same with the state(5) to be set 00:20:08.013 [2024-07-25 14:21:37.543920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.543943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.543965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.543988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.013 [2024-07-25 14:21:37.544756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.013 [2024-07-25 14:21:37.544771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.544785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.544801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.544815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.544831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.544848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.544865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.544879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.544894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.544909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.544924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.544938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.544954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.544968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.544984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.544998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:08.014 [2024-07-25 14:21:37.545460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 
14:21:37.545765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.014 [2024-07-25 14:21:37.545854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.014 [2024-07-25 14:21:37.545870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:08.015 [2024-07-25 14:21:37.545884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:08.015 [2024-07-25 14:21:37.545898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c49a60 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.547836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:08.015 [2024-07-25 14:21:37.547871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:08.015 [2024-07-25 14:21:37.547891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:08.015 [2024-07-25 14:21:37.547911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:08.015 [2024-07-25 14:21:37.548049] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:08.015 task offset: 16768 on job bdev=Nvme5n1 fails
00:20:08.015
00:20:08.015 Latency(us)
00:20:08.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:08.015 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme1n1 ended in about 0.84 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme1n1 : 0.84 152.37 9.52 76.19 0.00 276688.34 33593.27 254765.13
00:20:08.015 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme2n1 ended in about 0.83 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme2n1 : 0.83 158.34 9.90 76.77 0.00 262957.87 13786.83 242337.56
00:20:08.015 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme3n1 ended in about 0.83 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme3n1 : 0.83 232.33 14.52 77.44 0.00 194852.79 16505.36 254765.13
00:20:08.015 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme4n1 ended in about 0.84 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme4n1 : 0.84 151.77 9.49 75.88 0.00 259515.48 15243.19 250104.79
00:20:08.015 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme5n1 ended in about 0.82 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme5n1 : 0.82 155.64 9.73 77.82 0.00 246360.49 14078.10 278066.82
00:20:08.015 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme6n1 ended in about 0.85 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme6n1 : 0.85 151.17 9.45 75.59 0.00 248523.03 19029.71 259425.47
00:20:08.015 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme7n1 ended in about 0.85 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme7n1 : 0.85 150.59 9.41 75.29 0.00 243791.64 19320.98 236123.78
00:20:08.015 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme8n1 ended in about 0.85 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme8n1 : 0.85 150.00 9.38 75.00 0.00 238910.07 33787.45 239230.67
00:20:08.015 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme9n1 ended in about 0.82 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme9n1 : 0.82 155.38 9.71 77.69 0.00 223166.26 12718.84 260978.92
00:20:08.015 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:08.015 Job: Nvme10n1 ended in about 0.82 seconds with error
00:20:08.015 Verification LBA range: start 0x0 length 0x400
00:20:08.015 Nvme10n1 : 0.82 155.16 9.70 77.58 0.00 217729.20 36700.16 279620.27
00:20:08.015 ===================================================================================================================
00:20:08.015 Total : 1612.75 100.80 765.26 0.00 239799.54 12718.84 279620.27
00:20:08.015 [2024-07-25 14:21:37.576195] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:08.015 [2024-07-25 14:21:37.576273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting
controller 00:20:08.015 [2024-07-25 14:21:37.576577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.576615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba0830 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.576637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0830 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.576718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.576757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc3400 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.576774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc3400 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.576865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.576892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcf910 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.576908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcf910 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.577001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.577027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16a2610 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.577044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16a2610 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.578472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:08.015 [2024-07-25 14:21:37.578501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:08.015 [2024-07-25 14:21:37.578522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:08.015 [2024-07-25 14:21:37.578552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:20:08.015 [2024-07-25 14:21:37.578568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:08.015 [2024-07-25 14:21:37.578720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.578747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d67570 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.578763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d67570 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.578791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba0830 (9): Bad file descriptor 00:20:08.015 [2024-07-25 14:21:37.578815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc3400 (9): Bad file descriptor 00:20:08.015 [2024-07-25 14:21:37.578834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcf910 (9): Bad file descriptor 00:20:08.015 [2024-07-25 14:21:37.578853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: 
*ERROR*: Failed to flush tqpair=0x16a2610 (9): Bad file descriptor 00:20:08.015 [2024-07-25 14:21:37.578904] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:08.015 [2024-07-25 14:21:37.578929] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:08.015 [2024-07-25 14:21:37.578953] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:08.015 [2024-07-25 14:21:37.578974] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:08.015 [2024-07-25 14:21:37.579156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.579184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c757a0 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.579200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c757a0 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.579294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.579321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcac80 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.579337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcac80 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.579424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.579449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d58cd0 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.579465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d58cd0 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.579537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.579562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bd0280 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.579587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd0280 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.579660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:08.015 [2024-07-25 14:21:37.579684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc4b50 with addr=10.0.0.2, port=4420 00:20:08.015 [2024-07-25 14:21:37.579700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc4b50 is same with the state(5) to be set 00:20:08.015 [2024-07-25 14:21:37.579720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d67570 (9): Bad file descriptor 00:20:08.015 [2024-07-25 14:21:37.579738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:08.015 [2024-07-25 14:21:37.579752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.579770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:08.016 [2024-07-25 14:21:37.579790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.579804] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.579817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:08.016 [2024-07-25 14:21:37.579835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.579849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.579863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:08.016 [2024-07-25 14:21:37.579880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.579894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.579907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:08.016 [2024-07-25 14:21:37.579995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.016 [2024-07-25 14:21:37.580016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.016 [2024-07-25 14:21:37.580029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.016 [2024-07-25 14:21:37.580041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.016 [2024-07-25 14:21:37.580067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c757a0 (9): Bad file descriptor 00:20:08.016 [2024-07-25 14:21:37.580089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcac80 (9): Bad file descriptor 00:20:08.016 [2024-07-25 14:21:37.580109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d58cd0 (9): Bad file descriptor 00:20:08.016 [2024-07-25 14:21:37.580127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bd0280 (9): Bad file descriptor 00:20:08.016 [2024-07-25 14:21:37.580150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc4b50 (9): Bad file descriptor 00:20:08.016 [2024-07-25 14:21:37.580166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.580180] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.580194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:08.016 [2024-07-25 14:21:37.580230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.016 [2024-07-25 14:21:37.580248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.580262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.580275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:08.016 [2024-07-25 14:21:37.580291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.580308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.580323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:08.016 [2024-07-25 14:21:37.580339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.580353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.580366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:08.016 [2024-07-25 14:21:37.580383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.580397] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.580410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:20:08.016 [2024-07-25 14:21:37.580426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:08.016 [2024-07-25 14:21:37.580439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:08.016 [2024-07-25 14:21:37.580452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:08.016 [2024-07-25 14:21:37.580489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.016 [2024-07-25 14:21:37.580506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.016 [2024-07-25 14:21:37.580518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.016 [2024-07-25 14:21:37.580530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:08.016 [2024-07-25 14:21:37.580542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:08.586 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:08.586 14:21:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 960362 00:20:09.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (960362) - No such process 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.525 rmmod nvme_tcp 00:20:09.525 rmmod nvme_fabrics 00:20:09.525 rmmod nvme_keyring 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:09.525 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:09.526 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.526 14:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.526 14:21:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.068 00:20:12.068 real 0m7.907s 00:20:12.068 user 0m19.841s 00:20:12.068 sys 0m1.438s 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:12.068 ************************************ 00:20:12.068 END TEST nvmf_shutdown_tc3 00:20:12.068 ************************************ 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:12.068 00:20:12.068 real 0m27.478s 00:20:12.068 user 1m16.409s 00:20:12.068 sys 0m6.123s 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:12.068 ************************************ 00:20:12.068 END TEST nvmf_shutdown 00:20:12.068 ************************************ 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:20:12.068 00:20:12.068 real 10m19.435s 00:20:12.068 user 24m34.096s 00:20:12.068 sys 2m31.519s 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:12.068 14:21:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:12.068 ************************************ 00:20:12.068 END TEST nvmf_target_extra 00:20:12.068 ************************************ 00:20:12.068 14:21:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:12.068 14:21:41 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:12.068 14:21:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:12.068 14:21:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.068 14:21:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:12.068 ************************************ 00:20:12.068 START TEST nvmf_host 00:20:12.068 ************************************ 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:12.068 * Looking for test storage... 
00:20:12.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.068 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.069 ************************************ 00:20:12.069 START TEST nvmf_multicontroller 00:20:12.069 ************************************ 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:12.069 * Looking for test storage... 
00:20:12.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.069 14:21:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.069 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.070 14:21:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.990 14:21:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:13.990 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:13.990 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:13.990 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.990 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:13.991 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:13.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:13.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:20:13.991 00:20:13.991 --- 10.0.0.2 ping statistics --- 00:20:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.991 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:13.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:13.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:20:13.991 00:20:13.991 --- 10.0.0.1 ping statistics --- 00:20:13.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:13.991 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=962904 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 962904 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 962904 ']' 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.991 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.991 [2024-07-25 14:21:43.627208] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
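The trace above is nvmf_tcp_init splitting the two ice ports across a network namespace before the target starts: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator side, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. A minimal sketch of that same setup, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run:

    ip netns add cvl_0_0_ns_spdk                            # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the first port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP from the initiator side
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
    # the nvmf target then runs inside the namespace with the flags used above
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE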
00:20:13.991 [2024-07-25 14:21:43.627296] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.250 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.250 [2024-07-25 14:21:43.693618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:14.250 [2024-07-25 14:21:43.807875] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.250 [2024-07-25 14:21:43.807940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.251 [2024-07-25 14:21:43.807953] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.251 [2024-07-25 14:21:43.807964] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.251 [2024-07-25 14:21:43.807974] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.251 [2024-07-25 14:21:43.808074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.251 [2024-07-25 14:21:43.808193] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.251 [2024-07-25 14:21:43.808196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 [2024-07-25 14:21:43.958479] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 14:21:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 Malloc0 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 
14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 [2024-07-25 14:21:44.022595] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 [2024-07-25 14:21:44.030436] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 Malloc1 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.509 14:21:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:14.509 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=962930 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 962930 /var/tmp/bdevperf.sock 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 962930 ']' 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:14.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
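At this point the multicontroller test has configured the target and launched a standalone bdevperf with its own RPC socket; the controller attaches come next. A condensed recap of the sequence traced above (same arguments as shown; rpc_cmd is the test suite's RPC helper seen in the trace):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf gets its own RPC socket so controllers can be attached to it afterwards
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f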
00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:14.510 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.077 NVMe0n1 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.077 1 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:15.077 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.078 request: 00:20:15.078 { 00:20:15.078 "name": "NVMe0", 00:20:15.078 "trtype": "tcp", 00:20:15.078 "traddr": "10.0.0.2", 00:20:15.078 "adrfam": "ipv4", 00:20:15.078 
"trsvcid": "4420", 00:20:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.078 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:15.078 "hostaddr": "10.0.0.2", 00:20:15.078 "hostsvcid": "60000", 00:20:15.078 "prchk_reftag": false, 00:20:15.078 "prchk_guard": false, 00:20:15.078 "hdgst": false, 00:20:15.078 "ddgst": false, 00:20:15.078 "method": "bdev_nvme_attach_controller", 00:20:15.078 "req_id": 1 00:20:15.078 } 00:20:15.078 Got JSON-RPC error response 00:20:15.078 response: 00:20:15.078 { 00:20:15.078 "code": -114, 00:20:15.078 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:15.078 } 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.078 request: 00:20:15.078 { 00:20:15.078 "name": "NVMe0", 00:20:15.078 "trtype": "tcp", 00:20:15.078 "traddr": "10.0.0.2", 00:20:15.078 "adrfam": "ipv4", 00:20:15.078 "trsvcid": "4420", 00:20:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:15.078 "hostaddr": "10.0.0.2", 00:20:15.078 "hostsvcid": "60000", 00:20:15.078 "prchk_reftag": false, 00:20:15.078 "prchk_guard": false, 00:20:15.078 "hdgst": false, 00:20:15.078 "ddgst": false, 00:20:15.078 "method": "bdev_nvme_attach_controller", 00:20:15.078 "req_id": 1 00:20:15.078 } 00:20:15.078 Got JSON-RPC error response 00:20:15.078 response: 00:20:15.078 { 00:20:15.078 "code": -114, 00:20:15.078 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:20:15.078 } 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.078 request: 00:20:15.078 { 00:20:15.078 "name": "NVMe0", 00:20:15.078 "trtype": "tcp", 00:20:15.078 "traddr": "10.0.0.2", 00:20:15.078 "adrfam": "ipv4", 00:20:15.078 "trsvcid": "4420", 00:20:15.078 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.078 "hostaddr": "10.0.0.2", 00:20:15.078 "hostsvcid": "60000", 00:20:15.078 "prchk_reftag": false, 00:20:15.078 "prchk_guard": false, 00:20:15.078 "hdgst": false, 00:20:15.078 "ddgst": false, 00:20:15.078 "multipath": "disable", 00:20:15.078 "method": "bdev_nvme_attach_controller", 00:20:15.078 "req_id": 1 00:20:15.078 } 00:20:15.078 Got JSON-RPC error response 00:20:15.078 response: 00:20:15.078 { 00:20:15.078 "code": -114, 00:20:15.078 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:15.078 } 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.078 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.339 request: 00:20:15.339 { 00:20:15.339 "name": "NVMe0", 00:20:15.339 "trtype": "tcp", 00:20:15.339 "traddr": "10.0.0.2", 00:20:15.339 "adrfam": "ipv4", 00:20:15.339 "trsvcid": "4420", 00:20:15.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.339 "hostaddr": "10.0.0.2", 00:20:15.339 "hostsvcid": "60000", 00:20:15.339 "prchk_reftag": false, 00:20:15.339 "prchk_guard": false, 00:20:15.339 "hdgst": false, 00:20:15.339 "ddgst": false, 00:20:15.339 "multipath": "failover", 00:20:15.339 "method": "bdev_nvme_attach_controller", 00:20:15.339 "req_id": 1 00:20:15.339 } 00:20:15.339 Got JSON-RPC error response 00:20:15.339 response: 00:20:15.339 { 00:20:15.339 "code": -114, 00:20:15.339 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:15.339 } 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.339 00:20:15.339 14:21:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.339 14:21:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.597 00:20:15.597 14:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.597 14:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:15.597 14:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.597 14:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:15.597 14:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.597 14:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.597 14:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:15.597 14:21:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.972 0 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 962930 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 962930 ']' 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 962930 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 962930 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:16.972 
14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 962930' 00:20:16.972 killing process with pid 962930 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 962930 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 962930 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:16.972 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:16.972 [2024-07-25 14:21:44.137126] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:20:16.972 [2024-07-25 14:21:44.137229] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid962930 ] 00:20:16.972 EAL: No free 2048 kB hugepages reported on node 1 00:20:16.972 [2024-07-25 14:21:44.200494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.972 [2024-07-25 14:21:44.309251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.972 [2024-07-25 14:21:45.033626] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 659286d7-32fa-4b84-8a25-4aad266b6a5e already exists 00:20:16.972 [2024-07-25 14:21:45.033669] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:659286d7-32fa-4b84-8a25-4aad266b6a5e alias for bdev NVMe1n1 00:20:16.972 [2024-07-25 14:21:45.033685] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:16.972 Running I/O for 1 seconds... 
00:20:16.972 00:20:16.972 Latency(us) 00:20:16.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.972 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:16.972 NVMe0n1 : 1.01 19175.05 74.90 0.00 0.00 6664.91 2075.31 11990.66 00:20:16.972 =================================================================================================================== 00:20:16.972 Total : 19175.05 74.90 0.00 0.00 6664.91 2075.31 11990.66 00:20:16.972 Received shutdown signal, test time was about 1.000000 seconds 00:20:16.972 00:20:16.972 Latency(us) 00:20:16.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.972 =================================================================================================================== 00:20:16.972 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.972 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:16.972 rmmod nvme_tcp 00:20:16.972 rmmod nvme_fabrics 00:20:16.972 rmmod nvme_keyring 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 962904 ']' 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 962904 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 962904 ']' 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 962904 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 962904 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 962904' 00:20:16.972 killing process with pid 962904 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 962904 00:20:16.972 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 962904 00:20:17.542 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.542 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.542 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.542 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.542 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.542 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.542 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.542 14:21:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.448 14:21:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.448 00:20:19.448 real 0m7.596s 00:20:19.448 user 0m12.418s 00:20:19.448 sys 0m2.266s 00:20:19.448 14:21:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:19.448 14:21:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:19.448 ************************************ 00:20:19.448 END TEST nvmf_multicontroller 00:20:19.448 ************************************ 00:20:19.448 14:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:19.448 14:21:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:19.448 14:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:19.448 14:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:19.448 14:21:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:19.448 ************************************ 00:20:19.448 START TEST nvmf_aer 00:20:19.448 ************************************ 00:20:19.448 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:19.448 * Looking for test storage... 
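For reference before the aer run gets going: the multicontroller test that just finished drove bdev_nvme_attach_controller through its duplicate-name paths, where every conflicting re-attach of NVMe0 was expected to fail with JSON-RPC error -114 while the same-identity second path on port 4421 succeeded. In condensed form (a sketch of the checks traced above, not the test script itself; NOT is the autotest helper that asserts a command fails):

    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000                                     # first attach -> NVMe0n1
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001  # different hostnqn -> -114
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000                                 # different subsystem -> -114
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable                      # multipath disabled -> -114
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover                     # failover re-attach on the same path -> -114
    rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1                                                          # second listener, same identity -> ok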
00:20:19.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:19.448 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:19.448 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:19.448 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.448 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.448 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.449 14:21:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:21.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:21.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:21.982 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:21.982 14:21:51 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:21.982 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:21.982 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:21.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:20:21.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:20:21.983 00:20:21.983 --- 10.0.0.2 ping statistics --- 00:20:21.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.983 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:21.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:21.983 00:20:21.983 --- 10.0.0.1 ping statistics --- 00:20:21.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.983 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=965145 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 965145 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 965145 ']' 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.983 [2024-07-25 14:21:51.281508] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
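The nvmftestinit/nvmf_tcp_init trace above reduces to a short, repeatable bring-up: move the first ice port into a private network namespace, address both ends, open TCP/4420, verify reachability in both directions, then launch the target inside that namespace. A minimal sketch assembled from the commands visible in this log (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are what this rig happens to use, not fixed requirements):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # this run uses a 0xF core mask (four reactors)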
00:20:21.983 [2024-07-25 14:21:51.281596] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.983 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.983 [2024-07-25 14:21:51.351888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.983 [2024-07-25 14:21:51.458468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.983 [2024-07-25 14:21:51.458523] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.983 [2024-07-25 14:21:51.458543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.983 [2024-07-25 14:21:51.458554] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.983 [2024-07-25 14:21:51.458563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.983 [2024-07-25 14:21:51.458641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.983 [2024-07-25 14:21:51.458751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.983 [2024-07-25 14:21:51.458864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.983 [2024-07-25 14:21:51.458869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.983 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:21.983 [2024-07-25 14:21:51.626573] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.243 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.243 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:22.243 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.243 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.243 Malloc0 00:20:22.243 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.243 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.244 14:21:51 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.244 [2024-07-25 14:21:51.680482] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.244 [ 00:20:22.244 { 00:20:22.244 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.244 "subtype": "Discovery", 00:20:22.244 "listen_addresses": [], 00:20:22.244 "allow_any_host": true, 00:20:22.244 "hosts": [] 00:20:22.244 }, 00:20:22.244 { 00:20:22.244 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.244 "subtype": "NVMe", 00:20:22.244 "listen_addresses": [ 00:20:22.244 { 00:20:22.244 "trtype": "TCP", 00:20:22.244 "adrfam": "IPv4", 00:20:22.244 "traddr": "10.0.0.2", 00:20:22.244 "trsvcid": "4420" 00:20:22.244 } 00:20:22.244 ], 00:20:22.244 "allow_any_host": true, 00:20:22.244 "hosts": [], 00:20:22.244 "serial_number": "SPDK00000000000001", 00:20:22.244 "model_number": "SPDK bdev Controller", 00:20:22.244 "max_namespaces": 2, 00:20:22.244 "min_cntlid": 1, 00:20:22.244 "max_cntlid": 65519, 00:20:22.244 "namespaces": [ 00:20:22.244 { 00:20:22.244 "nsid": 1, 00:20:22.244 "bdev_name": "Malloc0", 00:20:22.244 "name": "Malloc0", 00:20:22.244 "nguid": "D1E2C5DD695045B981F28F223F968037", 00:20:22.244 "uuid": "d1e2c5dd-6950-45b9-81f2-8f223f968037" 00:20:22.244 } 00:20:22.244 ] 00:20:22.244 } 00:20:22.244 ] 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=965290 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:22.244 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:22.244 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:22.502 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:22.502 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:20:22.502 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:20:22.502 14:21:51 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 3 -lt 200 ']' 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=4 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.502 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.760 Malloc1 00:20:22.760 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.760 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:22.760 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.760 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.760 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.760 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:22.760 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.760 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.760 [ 00:20:22.760 { 00:20:22.760 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.760 "subtype": "Discovery", 00:20:22.760 "listen_addresses": [], 00:20:22.760 "allow_any_host": true, 00:20:22.760 "hosts": [] 00:20:22.760 }, 00:20:22.760 { 00:20:22.760 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.760 "subtype": "NVMe", 00:20:22.760 "listen_addresses": [ 00:20:22.760 { 00:20:22.760 "trtype": "TCP", 00:20:22.760 "adrfam": "IPv4", 00:20:22.760 "traddr": "10.0.0.2", 00:20:22.760 "trsvcid": "4420" 00:20:22.760 } 00:20:22.760 ], 00:20:22.760 "allow_any_host": true, 00:20:22.760 "hosts": [], 00:20:22.760 "serial_number": "SPDK00000000000001", 00:20:22.760 "model_number": "SPDK bdev Controller", 00:20:22.760 "max_namespaces": 2, 00:20:22.760 "min_cntlid": 1, 00:20:22.760 "max_cntlid": 65519, 00:20:22.760 "namespaces": [ 00:20:22.760 { 00:20:22.760 "nsid": 1, 00:20:22.760 "bdev_name": "Malloc0", 00:20:22.760 "name": "Malloc0", 00:20:22.760 "nguid": "D1E2C5DD695045B981F28F223F968037", 00:20:22.760 "uuid": "d1e2c5dd-6950-45b9-81f2-8f223f968037" 00:20:22.760 }, 00:20:22.760 { 00:20:22.760 "nsid": 2, 00:20:22.760 "bdev_name": "Malloc1", 00:20:22.760 "name": "Malloc1", 00:20:22.761 "nguid": "A665F328C7E149C898EED025D7BEB117", 00:20:22.761 "uuid": "a665f328-c7e1-49c8-98ee-d025d7beb117" 00:20:22.761 } 00:20:22.761 ] 00:20:22.761 } 00:20:22.761 ] 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 965290 00:20:22.761 Asynchronous Event Request test 00:20:22.761 Attaching to 10.0.0.2 00:20:22.761 Attached to 10.0.0.2 00:20:22.761 Registering asynchronous event callbacks... 00:20:22.761 Starting namespace attribute notice tests for all controllers... 00:20:22.761 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:22.761 aer_cb - Changed Namespace 00:20:22.761 Cleaning up... 
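The AER exercise above follows a simple pattern: the target exposes one subsystem with a single namespace, the aer tool connects and arms an Asynchronous Event Request, and the script provokes a namespace-attribute-changed event by hot-adding a second namespace over RPC. Condensed from the rpc_cmd calls in this trace (rpc_cmd is the autotest helper that forwards to scripts/rpc.py against the target running in the namespace):

  # target side: subsystem cnode1 with Malloc0 as namespace 1, listening on 10.0.0.2:4420
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 --name Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: run the aer tool in the background; it creates the touch file once the AER is armed
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  aerpid=$!
  while [ ! -e /tmp/aer_touch_file ]; do sleep 0.1; done

  # trigger the event: hot-add Malloc1 as namespace 2, then wait for "aer_cb - Changed Namespace"
  rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
  wait "$aerpid"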
00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:22.761 rmmod nvme_tcp 00:20:22.761 rmmod nvme_fabrics 00:20:22.761 rmmod nvme_keyring 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 965145 ']' 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 965145 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 965145 ']' 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 965145 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 965145 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 965145' 00:20:22.761 killing process with pid 965145 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 
965145 00:20:22.761 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 965145 00:20:23.019 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:23.019 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:23.019 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:23.019 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.019 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.019 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.019 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:23.019 14:21:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:25.556 00:20:25.556 real 0m5.624s 00:20:25.556 user 0m5.056s 00:20:25.556 sys 0m1.988s 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:25.556 ************************************ 00:20:25.556 END TEST nvmf_aer 00:20:25.556 ************************************ 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:25.556 ************************************ 00:20:25.556 START TEST nvmf_async_init 00:20:25.556 ************************************ 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:25.556 * Looking for test storage... 
00:20:25.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.556 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:25.557 14:21:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a8111bc116bd40a59c36541ea0cc7f7e 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.557 14:21:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.461 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:27.461 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:27.461 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:27.461 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:27.461 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:27.461 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:27.462 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:27.462 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:27.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:27.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:27.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:20:27.462 00:20:27.462 --- 10.0.0.2 ping statistics --- 00:20:27.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.462 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:27.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:27.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:20:27.462 00:20:27.462 --- 10.0.0.1 ping statistics --- 00:20:27.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.462 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:27.462 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=967250 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 967250 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 967250 ']' 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:27.463 14:21:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.463 [2024-07-25 14:21:57.015482] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:20:27.463 [2024-07-25 14:21:57.015563] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:27.463 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.463 [2024-07-25 14:21:57.087908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.723 [2024-07-25 14:21:57.198070] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:27.723 [2024-07-25 14:21:57.198136] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:27.723 [2024-07-25 14:21:57.198150] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:27.723 [2024-07-25 14:21:57.198162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:27.723 [2024-07-25 14:21:57.198172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:27.723 [2024-07-25 14:21:57.198200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.723 [2024-07-25 14:21:57.339259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.723 null0 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:27.723 14:21:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a8111bc116bd40a59c36541ea0cc7f7e 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.723 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.724 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:27.983 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.983 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.984 [2024-07-25 14:21:57.379547] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.984 nvme0n1 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.984 [ 00:20:27.984 { 00:20:27.984 "name": "nvme0n1", 00:20:27.984 "aliases": [ 00:20:27.984 "a8111bc1-16bd-40a5-9c36-541ea0cc7f7e" 00:20:27.984 ], 00:20:27.984 "product_name": "NVMe disk", 00:20:27.984 "block_size": 512, 00:20:27.984 "num_blocks": 2097152, 00:20:27.984 "uuid": "a8111bc1-16bd-40a5-9c36-541ea0cc7f7e", 00:20:27.984 "assigned_rate_limits": { 00:20:27.984 "rw_ios_per_sec": 0, 00:20:27.984 "rw_mbytes_per_sec": 0, 00:20:27.984 "r_mbytes_per_sec": 0, 00:20:27.984 "w_mbytes_per_sec": 0 00:20:27.984 }, 00:20:27.984 "claimed": false, 00:20:27.984 "zoned": false, 00:20:27.984 "supported_io_types": { 00:20:27.984 "read": true, 00:20:27.984 "write": true, 00:20:27.984 "unmap": false, 00:20:27.984 "flush": true, 00:20:27.984 "reset": true, 00:20:27.984 "nvme_admin": true, 00:20:27.984 "nvme_io": true, 00:20:27.984 "nvme_io_md": false, 00:20:27.984 "write_zeroes": true, 00:20:27.984 "zcopy": false, 00:20:27.984 "get_zone_info": false, 00:20:27.984 "zone_management": false, 00:20:27.984 "zone_append": false, 00:20:27.984 "compare": true, 00:20:27.984 "compare_and_write": true, 00:20:27.984 "abort": true, 00:20:27.984 "seek_hole": false, 00:20:27.984 "seek_data": false, 00:20:27.984 "copy": true, 00:20:27.984 "nvme_iov_md": 
false 00:20:27.984 }, 00:20:27.984 "memory_domains": [ 00:20:27.984 { 00:20:27.984 "dma_device_id": "system", 00:20:27.984 "dma_device_type": 1 00:20:27.984 } 00:20:27.984 ], 00:20:27.984 "driver_specific": { 00:20:27.984 "nvme": [ 00:20:27.984 { 00:20:27.984 "trid": { 00:20:27.984 "trtype": "TCP", 00:20:27.984 "adrfam": "IPv4", 00:20:27.984 "traddr": "10.0.0.2", 00:20:27.984 "trsvcid": "4420", 00:20:27.984 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:27.984 }, 00:20:27.984 "ctrlr_data": { 00:20:27.984 "cntlid": 1, 00:20:27.984 "vendor_id": "0x8086", 00:20:27.984 "model_number": "SPDK bdev Controller", 00:20:27.984 "serial_number": "00000000000000000000", 00:20:27.984 "firmware_revision": "24.09", 00:20:27.984 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:27.984 "oacs": { 00:20:27.984 "security": 0, 00:20:27.984 "format": 0, 00:20:27.984 "firmware": 0, 00:20:27.984 "ns_manage": 0 00:20:27.984 }, 00:20:27.984 "multi_ctrlr": true, 00:20:27.984 "ana_reporting": false 00:20:27.984 }, 00:20:27.984 "vs": { 00:20:27.984 "nvme_version": "1.3" 00:20:27.984 }, 00:20:27.984 "ns_data": { 00:20:27.984 "id": 1, 00:20:27.984 "can_share": true 00:20:27.984 } 00:20:27.984 } 00:20:27.984 ], 00:20:27.984 "mp_policy": "active_passive" 00:20:27.984 } 00:20:27.984 } 00:20:27.984 ] 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.984 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.984 [2024-07-25 14:21:57.628191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:27.984 [2024-07-25 14:21:57.628285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbf71d0 (9): Bad file descriptor 00:20:28.243 [2024-07-25 14:21:57.760197] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
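For reference, the namespace/listener/attach/reset sequence the async_init test has exercised up to this point reduces to a handful of SPDK RPCs. A minimal sketch, with rpc_cmd spelled out as scripts/rpc.py (the helper these tests use) and assuming the null bdev and nqn.2016-06.io.spdk:cnode0 subsystem created earlier in the test:

  # expose the null bdev as namespace 1 of cnode0, with a fixed UUID
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a8111bc116bd40a59c36541ea0cc7f7e
  # start a TCP listener on the target-side address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # attach from the initiator side and inspect the resulting nvme0n1 bdev
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_get_bdevs -b nvme0n1
  # force a reconnect; the bdev_get_bdevs output around it shows cntlid moving from 1 to 2
  scripts/rpc.py bdev_nvme_reset_controller nvme0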
00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.243 [ 00:20:28.243 { 00:20:28.243 "name": "nvme0n1", 00:20:28.243 "aliases": [ 00:20:28.243 "a8111bc1-16bd-40a5-9c36-541ea0cc7f7e" 00:20:28.243 ], 00:20:28.243 "product_name": "NVMe disk", 00:20:28.243 "block_size": 512, 00:20:28.243 "num_blocks": 2097152, 00:20:28.243 "uuid": "a8111bc1-16bd-40a5-9c36-541ea0cc7f7e", 00:20:28.243 "assigned_rate_limits": { 00:20:28.243 "rw_ios_per_sec": 0, 00:20:28.243 "rw_mbytes_per_sec": 0, 00:20:28.243 "r_mbytes_per_sec": 0, 00:20:28.243 "w_mbytes_per_sec": 0 00:20:28.243 }, 00:20:28.243 "claimed": false, 00:20:28.243 "zoned": false, 00:20:28.243 "supported_io_types": { 00:20:28.243 "read": true, 00:20:28.243 "write": true, 00:20:28.243 "unmap": false, 00:20:28.243 "flush": true, 00:20:28.243 "reset": true, 00:20:28.243 "nvme_admin": true, 00:20:28.243 "nvme_io": true, 00:20:28.243 "nvme_io_md": false, 00:20:28.243 "write_zeroes": true, 00:20:28.243 "zcopy": false, 00:20:28.243 "get_zone_info": false, 00:20:28.243 "zone_management": false, 00:20:28.243 "zone_append": false, 00:20:28.243 "compare": true, 00:20:28.243 "compare_and_write": true, 00:20:28.243 "abort": true, 00:20:28.243 "seek_hole": false, 00:20:28.243 "seek_data": false, 00:20:28.243 "copy": true, 00:20:28.243 "nvme_iov_md": false 00:20:28.243 }, 00:20:28.243 "memory_domains": [ 00:20:28.243 { 00:20:28.243 "dma_device_id": "system", 00:20:28.243 "dma_device_type": 1 00:20:28.243 } 00:20:28.243 ], 00:20:28.243 "driver_specific": { 00:20:28.243 "nvme": [ 00:20:28.243 { 00:20:28.243 "trid": { 00:20:28.243 "trtype": "TCP", 00:20:28.243 "adrfam": "IPv4", 00:20:28.243 "traddr": "10.0.0.2", 00:20:28.243 "trsvcid": "4420", 00:20:28.243 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:28.243 }, 00:20:28.243 "ctrlr_data": { 00:20:28.243 "cntlid": 2, 00:20:28.243 "vendor_id": "0x8086", 00:20:28.243 "model_number": "SPDK bdev Controller", 00:20:28.243 "serial_number": "00000000000000000000", 00:20:28.243 "firmware_revision": "24.09", 00:20:28.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:28.243 "oacs": { 00:20:28.243 "security": 0, 00:20:28.243 "format": 0, 00:20:28.243 "firmware": 0, 00:20:28.243 "ns_manage": 0 00:20:28.243 }, 00:20:28.243 "multi_ctrlr": true, 00:20:28.243 "ana_reporting": false 00:20:28.243 }, 00:20:28.243 "vs": { 00:20:28.243 "nvme_version": "1.3" 00:20:28.243 }, 00:20:28.243 "ns_data": { 00:20:28.243 "id": 1, 00:20:28.243 "can_share": true 00:20:28.243 } 00:20:28.243 } 00:20:28.243 ], 00:20:28.243 "mp_policy": "active_passive" 00:20:28.243 } 00:20:28.243 } 00:20:28.243 ] 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.243 14:21:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.cAg1KJVkCw 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.cAg1KJVkCw 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.243 [2024-07-25 14:21:57.812837] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:28.243 [2024-07-25 14:21:57.812967] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cAg1KJVkCw 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.243 [2024-07-25 14:21:57.820848] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.cAg1KJVkCw 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.243 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.243 [2024-07-25 14:21:57.828876] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:28.243 [2024-07-25 14:21:57.828932] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:28.502 nvme0n1 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
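The TLS portion just above adds a second, PSK-protected listener on port 4421 and restricts the subsystem to one explicitly allowed host. Condensed into plain rpc.py calls (a sketch; the xtrace does not show the redirect into the key file, and both the --psk path and the initiator-side PSK option are flagged as experimental/deprecated by this SPDK revision, as the notices above record):

  key_path=$(mktemp)
  echo -n 'NVMeTLSkey-1:01:...' > "$key_path"    # interchange-format TLS PSK; the full test key appears in the log above
  chmod 0600 "$key_path"
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  # initiator-side attach over the secure channel, as host1
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"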
00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.503 [ 00:20:28.503 { 00:20:28.503 "name": "nvme0n1", 00:20:28.503 "aliases": [ 00:20:28.503 "a8111bc1-16bd-40a5-9c36-541ea0cc7f7e" 00:20:28.503 ], 00:20:28.503 "product_name": "NVMe disk", 00:20:28.503 "block_size": 512, 00:20:28.503 "num_blocks": 2097152, 00:20:28.503 "uuid": "a8111bc1-16bd-40a5-9c36-541ea0cc7f7e", 00:20:28.503 "assigned_rate_limits": { 00:20:28.503 "rw_ios_per_sec": 0, 00:20:28.503 "rw_mbytes_per_sec": 0, 00:20:28.503 "r_mbytes_per_sec": 0, 00:20:28.503 "w_mbytes_per_sec": 0 00:20:28.503 }, 00:20:28.503 "claimed": false, 00:20:28.503 "zoned": false, 00:20:28.503 "supported_io_types": { 00:20:28.503 "read": true, 00:20:28.503 "write": true, 00:20:28.503 "unmap": false, 00:20:28.503 "flush": true, 00:20:28.503 "reset": true, 00:20:28.503 "nvme_admin": true, 00:20:28.503 "nvme_io": true, 00:20:28.503 "nvme_io_md": false, 00:20:28.503 "write_zeroes": true, 00:20:28.503 "zcopy": false, 00:20:28.503 "get_zone_info": false, 00:20:28.503 "zone_management": false, 00:20:28.503 "zone_append": false, 00:20:28.503 "compare": true, 00:20:28.503 "compare_and_write": true, 00:20:28.503 "abort": true, 00:20:28.503 "seek_hole": false, 00:20:28.503 "seek_data": false, 00:20:28.503 "copy": true, 00:20:28.503 "nvme_iov_md": false 00:20:28.503 }, 00:20:28.503 "memory_domains": [ 00:20:28.503 { 00:20:28.503 "dma_device_id": "system", 00:20:28.503 "dma_device_type": 1 00:20:28.503 } 00:20:28.503 ], 00:20:28.503 "driver_specific": { 00:20:28.503 "nvme": [ 00:20:28.503 { 00:20:28.503 "trid": { 00:20:28.503 "trtype": "TCP", 00:20:28.503 "adrfam": "IPv4", 00:20:28.503 "traddr": "10.0.0.2", 00:20:28.503 "trsvcid": "4421", 00:20:28.503 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:28.503 }, 00:20:28.503 "ctrlr_data": { 00:20:28.503 "cntlid": 3, 00:20:28.503 "vendor_id": "0x8086", 00:20:28.503 "model_number": "SPDK bdev Controller", 00:20:28.503 "serial_number": "00000000000000000000", 00:20:28.503 "firmware_revision": "24.09", 00:20:28.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:28.503 "oacs": { 00:20:28.503 "security": 0, 00:20:28.503 "format": 0, 00:20:28.503 "firmware": 0, 00:20:28.503 "ns_manage": 0 00:20:28.503 }, 00:20:28.503 "multi_ctrlr": true, 00:20:28.503 "ana_reporting": false 00:20:28.503 }, 00:20:28.503 "vs": { 00:20:28.503 "nvme_version": "1.3" 00:20:28.503 }, 00:20:28.503 "ns_data": { 00:20:28.503 "id": 1, 00:20:28.503 "can_share": true 00:20:28.503 } 00:20:28.503 } 00:20:28.503 ], 00:20:28.503 "mp_policy": "active_passive" 00:20:28.503 } 00:20:28.503 } 00:20:28.503 ] 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.cAg1KJVkCw 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:28.503 14:21:57 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:28.503 rmmod nvme_tcp 00:20:28.503 rmmod nvme_fabrics 00:20:28.503 rmmod nvme_keyring 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 967250 ']' 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 967250 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 967250 ']' 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 967250 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:28.503 14:21:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 967250 00:20:28.503 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:28.503 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:28.503 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 967250' 00:20:28.503 killing process with pid 967250 00:20:28.503 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 967250 00:20:28.503 [2024-07-25 14:21:58.007272] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:28.503 [2024-07-25 14:21:58.007310] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:28.503 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 967250 00:20:28.762 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.762 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.762 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.762 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.762 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.762 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.762 14:21:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:28.762 14:21:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.666 14:22:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.666 00:20:30.666 real 0m5.600s 00:20:30.666 user 0m2.108s 00:20:30.666 sys 0m1.876s 00:20:30.666 14:22:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.666 14:22:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:30.666 ************************************ 00:20:30.666 END TEST nvmf_async_init 00:20:30.666 ************************************ 00:20:30.666 14:22:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:30.666 14:22:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:30.666 14:22:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:30.666 14:22:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.666 14:22:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.925 ************************************ 00:20:30.925 START TEST dma 00:20:30.926 ************************************ 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:30.926 * Looking for test storage... 00:20:30.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:30.926 00:20:30.926 real 0m0.072s 00:20:30.926 user 0m0.033s 00:20:30.926 sys 0m0.044s 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:30.926 ************************************ 00:20:30.926 END TEST dma 00:20:30.926 ************************************ 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:30.926 ************************************ 00:20:30.926 START TEST nvmf_identify 00:20:30.926 ************************************ 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:30.926 * Looking for test storage... 
00:20:30.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.926 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.927 14:22:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:33.462 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:33.462 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.462 14:22:02 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:33.462 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:33.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:33.462 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:33.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:33.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:20:33.463 00:20:33.463 --- 10.0.0.2 ping statistics --- 00:20:33.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.463 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:33.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:33.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:20:33.463 00:20:33.463 --- 10.0.0.1 ping statistics --- 00:20:33.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:33.463 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=969396 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 969396 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 969396 ']' 00:20:33.463 14:22:02 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:33.463 14:22:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.463 [2024-07-25 14:22:02.767133] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:20:33.463 [2024-07-25 14:22:02.767219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.463 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.463 [2024-07-25 14:22:02.838828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:33.463 [2024-07-25 14:22:02.951352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.463 [2024-07-25 14:22:02.951410] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.463 [2024-07-25 14:22:02.951424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.463 [2024-07-25 14:22:02.951436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.463 [2024-07-25 14:22:02.951446] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
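Before the identify test can reach the target, nvmf_tcp_init above splits the two ice-driven ports between the root namespace and a dedicated network namespace, then launches nvmf_tgt inside that namespace. A rough sketch reconstructed from the trace (interface names cvl_0_0/cvl_0_1 are specific to this node; exact ordering and error handling live in nvmf/common.sh):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target-side port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator-side port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # sanity-check both directions
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF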
00:20:33.463 [2024-07-25 14:22:02.951508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.463 [2024-07-25 14:22:02.951566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.463 [2024-07-25 14:22:02.951634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.463 [2024-07-25 14:22:02.951637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.463 [2024-07-25 14:22:03.085549] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:33.463 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.722 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.722 Malloc0 00:20:33.722 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.723 [2024-07-25 14:22:03.156835] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.723 [ 00:20:33.723 { 00:20:33.723 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:33.723 "subtype": "Discovery", 00:20:33.723 "listen_addresses": [ 00:20:33.723 { 00:20:33.723 "trtype": "TCP", 00:20:33.723 "adrfam": "IPv4", 00:20:33.723 "traddr": "10.0.0.2", 00:20:33.723 "trsvcid": "4420" 00:20:33.723 } 00:20:33.723 ], 00:20:33.723 "allow_any_host": true, 00:20:33.723 "hosts": [] 00:20:33.723 }, 00:20:33.723 { 00:20:33.723 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.723 "subtype": "NVMe", 00:20:33.723 "listen_addresses": [ 00:20:33.723 { 00:20:33.723 "trtype": "TCP", 00:20:33.723 "adrfam": "IPv4", 00:20:33.723 "traddr": "10.0.0.2", 00:20:33.723 "trsvcid": "4420" 00:20:33.723 } 00:20:33.723 ], 00:20:33.723 "allow_any_host": true, 00:20:33.723 "hosts": [], 00:20:33.723 "serial_number": "SPDK00000000000001", 00:20:33.723 "model_number": "SPDK bdev Controller", 00:20:33.723 "max_namespaces": 32, 00:20:33.723 "min_cntlid": 1, 00:20:33.723 "max_cntlid": 65519, 00:20:33.723 "namespaces": [ 00:20:33.723 { 00:20:33.723 "nsid": 1, 00:20:33.723 "bdev_name": "Malloc0", 00:20:33.723 "name": "Malloc0", 00:20:33.723 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:33.723 "eui64": "ABCDEF0123456789", 00:20:33.723 "uuid": "9b22b228-7e4b-4486-8007-ab618cd260c4" 00:20:33.723 } 00:20:33.723 ] 00:20:33.723 } 00:20:33.723 ] 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.723 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:33.723 [2024-07-25 14:22:03.197449] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:20:33.723 [2024-07-25 14:22:03.197496] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969500 ] 00:20:33.723 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.723 [2024-07-25 14:22:03.231322] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:33.723 [2024-07-25 14:22:03.231407] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:33.723 [2024-07-25 14:22:03.231417] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:33.723 [2024-07-25 14:22:03.231434] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:33.723 [2024-07-25 14:22:03.231448] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:33.723 [2024-07-25 14:22:03.231754] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:33.723 [2024-07-25 14:22:03.231804] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x8a3540 0 00:20:33.723 [2024-07-25 14:22:03.238071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:33.723 [2024-07-25 14:22:03.238099] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:33.723 [2024-07-25 14:22:03.238110] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:33.723 [2024-07-25 14:22:03.238116] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:33.723 [2024-07-25 14:22:03.238168] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.238182] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.238190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.723 [2024-07-25 14:22:03.238209] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:33.723 [2024-07-25 14:22:03.238237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.723 [2024-07-25 14:22:03.245071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.723 [2024-07-25 14:22:03.245089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.723 [2024-07-25 14:22:03.245096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.723 [2024-07-25 14:22:03.245121] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:33.723 [2024-07-25 14:22:03.245149] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:33.723 [2024-07-25 14:22:03.245159] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:33.723 [2024-07-25 14:22:03.245185] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245194] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.723 [2024-07-25 14:22:03.245213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.723 [2024-07-25 14:22:03.245237] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.723 [2024-07-25 14:22:03.245339] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.723 [2024-07-25 14:22:03.245352] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.723 [2024-07-25 14:22:03.245359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.723 [2024-07-25 14:22:03.245379] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:33.723 [2024-07-25 14:22:03.245398] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:33.723 [2024-07-25 14:22:03.245411] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245419] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.723 [2024-07-25 14:22:03.245436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.723 [2024-07-25 14:22:03.245457] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.723 [2024-07-25 14:22:03.245540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.723 [2024-07-25 14:22:03.245554] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.723 [2024-07-25 14:22:03.245561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.723 [2024-07-25 14:22:03.245576] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:33.723 [2024-07-25 14:22:03.245591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:33.723 [2024-07-25 14:22:03.245603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245610] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.723 [2024-07-25 14:22:03.245617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.723 [2024-07-25 14:22:03.245628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.723 [2024-07-25 14:22:03.245649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.724 [2024-07-25 14:22:03.245726] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.724 
[2024-07-25 14:22:03.245739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.724 [2024-07-25 14:22:03.245746] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.245753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.724 [2024-07-25 14:22:03.245762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:33.724 [2024-07-25 14:22:03.245778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.245787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.245794] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.724 [2024-07-25 14:22:03.245804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.724 [2024-07-25 14:22:03.245825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.724 [2024-07-25 14:22:03.245904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.724 [2024-07-25 14:22:03.245918] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.724 [2024-07-25 14:22:03.245925] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.245931] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.724 [2024-07-25 14:22:03.245940] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:33.724 [2024-07-25 14:22:03.245949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:33.724 [2024-07-25 14:22:03.245962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:33.724 [2024-07-25 14:22:03.246076] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:33.724 [2024-07-25 14:22:03.246088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:33.724 [2024-07-25 14:22:03.246103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.246111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.246117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.724 [2024-07-25 14:22:03.246128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.724 [2024-07-25 14:22:03.246149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.724 [2024-07-25 14:22:03.246229] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.724 [2024-07-25 14:22:03.246241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.724 [2024-07-25 14:22:03.246248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:20:33.724 [2024-07-25 14:22:03.246254] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.724 [2024-07-25 14:22:03.246262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:33.724 [2024-07-25 14:22:03.246278] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.246287] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.246294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.724 [2024-07-25 14:22:03.246304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.724 [2024-07-25 14:22:03.246324] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.724 [2024-07-25 14:22:03.246406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.724 [2024-07-25 14:22:03.246420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.724 [2024-07-25 14:22:03.246427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.246434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.724 [2024-07-25 14:22:03.246442] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:33.724 [2024-07-25 14:22:03.246450] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:33.724 [2024-07-25 14:22:03.246463] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:33.724 [2024-07-25 14:22:03.246478] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:33.724 [2024-07-25 14:22:03.246495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.246503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.724 [2024-07-25 14:22:03.246514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.724 [2024-07-25 14:22:03.246535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.724 [2024-07-25 14:22:03.246662] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.724 [2024-07-25 14:22:03.246677] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.724 [2024-07-25 14:22:03.246684] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.246695] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a3540): datao=0, datal=4096, cccid=0 00:20:33.724 [2024-07-25 14:22:03.246705] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9033c0) on tqpair(0x8a3540): expected_datao=0, payload_size=4096 00:20:33.724 [2024-07-25 14:22:03.246713] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:20:33.724 [2024-07-25 14:22:03.246732] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.246742] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287151] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.724 [2024-07-25 14:22:03.287171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.724 [2024-07-25 14:22:03.287179] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.724 [2024-07-25 14:22:03.287199] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:33.724 [2024-07-25 14:22:03.287209] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:33.724 [2024-07-25 14:22:03.287217] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:33.724 [2024-07-25 14:22:03.287225] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:33.724 [2024-07-25 14:22:03.287234] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:33.724 [2024-07-25 14:22:03.287242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:33.724 [2024-07-25 14:22:03.287257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:33.724 [2024-07-25 14:22:03.287276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287291] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.724 [2024-07-25 14:22:03.287303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:33.724 [2024-07-25 14:22:03.287327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.724 [2024-07-25 14:22:03.287411] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.724 [2024-07-25 14:22:03.287424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.724 [2024-07-25 14:22:03.287430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:33.724 [2024-07-25 14:22:03.287450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287464] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x8a3540) 00:20:33.724 [2024-07-25 14:22:03.287474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.724 [2024-07-25 14:22:03.287484] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x8a3540) 00:20:33.724 [2024-07-25 14:22:03.287506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.724 [2024-07-25 14:22:03.287515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x8a3540) 00:20:33.724 [2024-07-25 14:22:03.287543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.724 [2024-07-25 14:22:03.287553] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.724 [2024-07-25 14:22:03.287566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:33.725 [2024-07-25 14:22:03.287575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.725 [2024-07-25 14:22:03.287584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:33.725 [2024-07-25 14:22:03.287618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:33.725 [2024-07-25 14:22:03.287631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.287638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a3540) 00:20:33.725 [2024-07-25 14:22:03.287649] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.725 [2024-07-25 14:22:03.287670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9033c0, cid 0, qid 0 00:20:33.725 [2024-07-25 14:22:03.287697] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903540, cid 1, qid 0 00:20:33.725 [2024-07-25 14:22:03.287705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9036c0, cid 2, qid 0 00:20:33.725 [2024-07-25 14:22:03.287713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:33.725 [2024-07-25 14:22:03.287720] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9039c0, cid 4, qid 0 00:20:33.725 [2024-07-25 14:22:03.287830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.725 [2024-07-25 14:22:03.287842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.725 [2024-07-25 14:22:03.287849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.287856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9039c0) on tqpair=0x8a3540 00:20:33.725 [2024-07-25 14:22:03.287866] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:33.725 [2024-07-25 14:22:03.287875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:33.725 [2024-07-25 14:22:03.287893] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.287902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a3540) 00:20:33.725 [2024-07-25 14:22:03.287913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.725 [2024-07-25 14:22:03.287933] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9039c0, cid 4, qid 0 00:20:33.725 [2024-07-25 14:22:03.288030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.725 [2024-07-25 14:22:03.288045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.725 [2024-07-25 14:22:03.288052] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288067] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a3540): datao=0, datal=4096, cccid=4 00:20:33.725 [2024-07-25 14:22:03.288076] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9039c0) on tqpair(0x8a3540): expected_datao=0, payload_size=4096 00:20:33.725 [2024-07-25 14:22:03.288088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288099] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288107] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.725 [2024-07-25 14:22:03.288129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.725 [2024-07-25 14:22:03.288135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9039c0) on tqpair=0x8a3540 00:20:33.725 [2024-07-25 14:22:03.288162] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:33.725 [2024-07-25 14:22:03.288200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a3540) 00:20:33.725 [2024-07-25 14:22:03.288223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.725 [2024-07-25 14:22:03.288234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x8a3540) 00:20:33.725 [2024-07-25 14:22:03.288257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.725 [2024-07-25 14:22:03.288284] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x9039c0, cid 4, qid 0 00:20:33.725 [2024-07-25 14:22:03.288295] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903b40, cid 5, qid 0 00:20:33.725 [2024-07-25 14:22:03.288416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.725 [2024-07-25 14:22:03.288428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.725 [2024-07-25 14:22:03.288435] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288442] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a3540): datao=0, datal=1024, cccid=4 00:20:33.725 [2024-07-25 14:22:03.288449] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9039c0) on tqpair(0x8a3540): expected_datao=0, payload_size=1024 00:20:33.725 [2024-07-25 14:22:03.288457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288466] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288474] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288483] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.725 [2024-07-25 14:22:03.288492] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.725 [2024-07-25 14:22:03.288498] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.288504] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903b40) on tqpair=0x8a3540 00:20:33.725 [2024-07-25 14:22:03.329134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.725 [2024-07-25 14:22:03.329153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.725 [2024-07-25 14:22:03.329161] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.329168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9039c0) on tqpair=0x8a3540 00:20:33.725 [2024-07-25 14:22:03.329186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.329196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a3540) 00:20:33.725 [2024-07-25 14:22:03.329208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.725 [2024-07-25 14:22:03.329238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9039c0, cid 4, qid 0 00:20:33.725 [2024-07-25 14:22:03.329342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.725 [2024-07-25 14:22:03.329355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.725 [2024-07-25 14:22:03.329362] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.329369] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a3540): datao=0, datal=3072, cccid=4 00:20:33.725 [2024-07-25 14:22:03.329376] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9039c0) on tqpair(0x8a3540): expected_datao=0, payload_size=3072 00:20:33.725 [2024-07-25 14:22:03.329384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.329404] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.329414] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.370141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.725 [2024-07-25 14:22:03.370160] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.725 [2024-07-25 14:22:03.370168] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.370175] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9039c0) on tqpair=0x8a3540 00:20:33.725 [2024-07-25 14:22:03.370192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.370201] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x8a3540) 00:20:33.725 [2024-07-25 14:22:03.370213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:33.725 [2024-07-25 14:22:03.370243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9039c0, cid 4, qid 0 00:20:33.725 [2024-07-25 14:22:03.370341] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:33.725 [2024-07-25 14:22:03.370353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:33.725 [2024-07-25 14:22:03.370360] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.370376] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x8a3540): datao=0, datal=8, cccid=4 00:20:33.725 [2024-07-25 14:22:03.370383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9039c0) on tqpair(0x8a3540): expected_datao=0, payload_size=8 00:20:33.725 [2024-07-25 14:22:03.370391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.370401] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:33.725 [2024-07-25 14:22:03.370409] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:33.998 [2024-07-25 14:22:03.414071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:33.999 [2024-07-25 14:22:03.414089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:33.999 [2024-07-25 14:22:03.414097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:33.999 [2024-07-25 14:22:03.414119] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9039c0) on tqpair=0x8a3540 00:20:33.999 ===================================================== 00:20:33.999 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:33.999 ===================================================== 00:20:33.999 Controller Capabilities/Features 00:20:33.999 ================================ 00:20:33.999 Vendor ID: 0000 00:20:33.999 Subsystem Vendor ID: 0000 00:20:33.999 Serial Number: .................... 00:20:33.999 Model Number: ........................................ 
00:20:33.999 Firmware Version: 24.09 00:20:33.999 Recommended Arb Burst: 0 00:20:33.999 IEEE OUI Identifier: 00 00 00 00:20:33.999 Multi-path I/O 00:20:33.999 May have multiple subsystem ports: No 00:20:33.999 May have multiple controllers: No 00:20:33.999 Associated with SR-IOV VF: No 00:20:33.999 Max Data Transfer Size: 131072 00:20:33.999 Max Number of Namespaces: 0 00:20:33.999 Max Number of I/O Queues: 1024 00:20:33.999 NVMe Specification Version (VS): 1.3 00:20:33.999 NVMe Specification Version (Identify): 1.3 00:20:33.999 Maximum Queue Entries: 128 00:20:33.999 Contiguous Queues Required: Yes 00:20:33.999 Arbitration Mechanisms Supported 00:20:33.999 Weighted Round Robin: Not Supported 00:20:33.999 Vendor Specific: Not Supported 00:20:33.999 Reset Timeout: 15000 ms 00:20:33.999 Doorbell Stride: 4 bytes 00:20:33.999 NVM Subsystem Reset: Not Supported 00:20:33.999 Command Sets Supported 00:20:33.999 NVM Command Set: Supported 00:20:33.999 Boot Partition: Not Supported 00:20:33.999 Memory Page Size Minimum: 4096 bytes 00:20:33.999 Memory Page Size Maximum: 4096 bytes 00:20:33.999 Persistent Memory Region: Not Supported 00:20:33.999 Optional Asynchronous Events Supported 00:20:33.999 Namespace Attribute Notices: Not Supported 00:20:33.999 Firmware Activation Notices: Not Supported 00:20:33.999 ANA Change Notices: Not Supported 00:20:33.999 PLE Aggregate Log Change Notices: Not Supported 00:20:33.999 LBA Status Info Alert Notices: Not Supported 00:20:33.999 EGE Aggregate Log Change Notices: Not Supported 00:20:33.999 Normal NVM Subsystem Shutdown event: Not Supported 00:20:33.999 Zone Descriptor Change Notices: Not Supported 00:20:33.999 Discovery Log Change Notices: Supported 00:20:33.999 Controller Attributes 00:20:33.999 128-bit Host Identifier: Not Supported 00:20:33.999 Non-Operational Permissive Mode: Not Supported 00:20:33.999 NVM Sets: Not Supported 00:20:33.999 Read Recovery Levels: Not Supported 00:20:33.999 Endurance Groups: Not Supported 00:20:33.999 Predictable Latency Mode: Not Supported 00:20:33.999 Traffic Based Keep ALive: Not Supported 00:20:33.999 Namespace Granularity: Not Supported 00:20:33.999 SQ Associations: Not Supported 00:20:33.999 UUID List: Not Supported 00:20:33.999 Multi-Domain Subsystem: Not Supported 00:20:33.999 Fixed Capacity Management: Not Supported 00:20:33.999 Variable Capacity Management: Not Supported 00:20:33.999 Delete Endurance Group: Not Supported 00:20:33.999 Delete NVM Set: Not Supported 00:20:33.999 Extended LBA Formats Supported: Not Supported 00:20:33.999 Flexible Data Placement Supported: Not Supported 00:20:33.999 00:20:33.999 Controller Memory Buffer Support 00:20:33.999 ================================ 00:20:33.999 Supported: No 00:20:33.999 00:20:33.999 Persistent Memory Region Support 00:20:33.999 ================================ 00:20:33.999 Supported: No 00:20:33.999 00:20:33.999 Admin Command Set Attributes 00:20:33.999 ============================ 00:20:33.999 Security Send/Receive: Not Supported 00:20:33.999 Format NVM: Not Supported 00:20:33.999 Firmware Activate/Download: Not Supported 00:20:33.999 Namespace Management: Not Supported 00:20:33.999 Device Self-Test: Not Supported 00:20:33.999 Directives: Not Supported 00:20:33.999 NVMe-MI: Not Supported 00:20:33.999 Virtualization Management: Not Supported 00:20:33.999 Doorbell Buffer Config: Not Supported 00:20:33.999 Get LBA Status Capability: Not Supported 00:20:33.999 Command & Feature Lockdown Capability: Not Supported 00:20:33.999 Abort Command Limit: 1 00:20:33.999 Async 
Event Request Limit: 4 00:20:33.999 Number of Firmware Slots: N/A 00:20:33.999 Firmware Slot 1 Read-Only: N/A 00:20:33.999 Firmware Activation Without Reset: N/A 00:20:33.999 Multiple Update Detection Support: N/A 00:20:33.999 Firmware Update Granularity: No Information Provided 00:20:33.999 Per-Namespace SMART Log: No 00:20:33.999 Asymmetric Namespace Access Log Page: Not Supported 00:20:33.999 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:33.999 Command Effects Log Page: Not Supported 00:20:33.999 Get Log Page Extended Data: Supported 00:20:33.999 Telemetry Log Pages: Not Supported 00:20:33.999 Persistent Event Log Pages: Not Supported 00:20:33.999 Supported Log Pages Log Page: May Support 00:20:33.999 Commands Supported & Effects Log Page: Not Supported 00:20:33.999 Feature Identifiers & Effects Log Page:May Support 00:20:33.999 NVMe-MI Commands & Effects Log Page: May Support 00:20:33.999 Data Area 4 for Telemetry Log: Not Supported 00:20:33.999 Error Log Page Entries Supported: 128 00:20:33.999 Keep Alive: Not Supported 00:20:33.999 00:20:33.999 NVM Command Set Attributes 00:20:33.999 ========================== 00:20:33.999 Submission Queue Entry Size 00:20:33.999 Max: 1 00:20:33.999 Min: 1 00:20:33.999 Completion Queue Entry Size 00:20:33.999 Max: 1 00:20:33.999 Min: 1 00:20:33.999 Number of Namespaces: 0 00:20:33.999 Compare Command: Not Supported 00:20:33.999 Write Uncorrectable Command: Not Supported 00:20:33.999 Dataset Management Command: Not Supported 00:20:33.999 Write Zeroes Command: Not Supported 00:20:33.999 Set Features Save Field: Not Supported 00:20:33.999 Reservations: Not Supported 00:20:33.999 Timestamp: Not Supported 00:20:33.999 Copy: Not Supported 00:20:33.999 Volatile Write Cache: Not Present 00:20:33.999 Atomic Write Unit (Normal): 1 00:20:33.999 Atomic Write Unit (PFail): 1 00:20:33.999 Atomic Compare & Write Unit: 1 00:20:33.999 Fused Compare & Write: Supported 00:20:33.999 Scatter-Gather List 00:20:33.999 SGL Command Set: Supported 00:20:33.999 SGL Keyed: Supported 00:20:33.999 SGL Bit Bucket Descriptor: Not Supported 00:20:33.999 SGL Metadata Pointer: Not Supported 00:20:33.999 Oversized SGL: Not Supported 00:20:33.999 SGL Metadata Address: Not Supported 00:20:33.999 SGL Offset: Supported 00:20:33.999 Transport SGL Data Block: Not Supported 00:20:33.999 Replay Protected Memory Block: Not Supported 00:20:33.999 00:20:33.999 Firmware Slot Information 00:20:33.999 ========================= 00:20:33.999 Active slot: 0 00:20:33.999 00:20:33.999 00:20:33.999 Error Log 00:20:33.999 ========= 00:20:33.999 00:20:33.999 Active Namespaces 00:20:33.999 ================= 00:20:33.999 Discovery Log Page 00:20:33.999 ================== 00:20:33.999 Generation Counter: 2 00:20:33.999 Number of Records: 2 00:20:33.999 Record Format: 0 00:20:33.999 00:20:33.999 Discovery Log Entry 0 00:20:33.999 ---------------------- 00:20:33.999 Transport Type: 3 (TCP) 00:20:33.999 Address Family: 1 (IPv4) 00:20:33.999 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:33.999 Entry Flags: 00:20:33.999 Duplicate Returned Information: 1 00:20:33.999 Explicit Persistent Connection Support for Discovery: 1 00:20:33.999 Transport Requirements: 00:20:33.999 Secure Channel: Not Required 00:20:33.999 Port ID: 0 (0x0000) 00:20:33.999 Controller ID: 65535 (0xffff) 00:20:33.999 Admin Max SQ Size: 128 00:20:33.999 Transport Service Identifier: 4420 00:20:33.999 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:33.999 Transport Address: 10.0.0.2 00:20:33.999 
Discovery Log Entry 1 00:20:33.999 ---------------------- 00:20:34.000 Transport Type: 3 (TCP) 00:20:34.000 Address Family: 1 (IPv4) 00:20:34.000 Subsystem Type: 2 (NVM Subsystem) 00:20:34.000 Entry Flags: 00:20:34.000 Duplicate Returned Information: 0 00:20:34.000 Explicit Persistent Connection Support for Discovery: 0 00:20:34.000 Transport Requirements: 00:20:34.000 Secure Channel: Not Required 00:20:34.000 Port ID: 0 (0x0000) 00:20:34.000 Controller ID: 65535 (0xffff) 00:20:34.000 Admin Max SQ Size: 128 00:20:34.000 Transport Service Identifier: 4420 00:20:34.000 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:34.000 Transport Address: 10.0.0.2 [2024-07-25 14:22:03.414240] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:34.000 [2024-07-25 14:22:03.414263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9033c0) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.414276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.000 [2024-07-25 14:22:03.414285] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903540) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.414293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.000 [2024-07-25 14:22:03.414301] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9036c0) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.414309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.000 [2024-07-25 14:22:03.414320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.414328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.000 [2024-07-25 14:22:03.414347] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.000 [2024-07-25 14:22:03.414374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.000 [2024-07-25 14:22:03.414400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.000 [2024-07-25 14:22:03.414474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.000 [2024-07-25 14:22:03.414486] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.000 [2024-07-25 14:22:03.414493] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.414512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414520] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414526] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.000 [2024-07-25 14:22:03.414536] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.000 [2024-07-25 14:22:03.414563] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.000 [2024-07-25 14:22:03.414660] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.000 [2024-07-25 14:22:03.414674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.000 [2024-07-25 14:22:03.414681] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414688] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.414697] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:34.000 [2024-07-25 14:22:03.414706] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:34.000 [2024-07-25 14:22:03.414721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414737] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.000 [2024-07-25 14:22:03.414748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.000 [2024-07-25 14:22:03.414769] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.000 [2024-07-25 14:22:03.414846] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.000 [2024-07-25 14:22:03.414860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.000 [2024-07-25 14:22:03.414867] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414874] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.414891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.414907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.000 [2024-07-25 14:22:03.414918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.000 [2024-07-25 14:22:03.414938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.000 [2024-07-25 14:22:03.415015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.000 [2024-07-25 14:22:03.415028] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.000 [2024-07-25 14:22:03.415034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.415057] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415083] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.000 [2024-07-25 14:22:03.415094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.000 [2024-07-25 14:22:03.415115] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.000 [2024-07-25 14:22:03.415209] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.000 [2024-07-25 14:22:03.415221] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.000 [2024-07-25 14:22:03.415228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.415250] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415259] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.000 [2024-07-25 14:22:03.415277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.000 [2024-07-25 14:22:03.415297] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.000 [2024-07-25 14:22:03.415373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.000 [2024-07-25 14:22:03.415385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.000 [2024-07-25 14:22:03.415392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415399] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.415414] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415423] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415430] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.000 [2024-07-25 14:22:03.415440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.000 [2024-07-25 14:22:03.415460] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.000 [2024-07-25 14:22:03.415557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.000 [2024-07-25 14:22:03.415571] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.000 [2024-07-25 14:22:03.415578] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.415601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.000 [2024-07-25 14:22:03.415628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.000 [2024-07-25 14:22:03.415648] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.000 [2024-07-25 14:22:03.415721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.000 [2024-07-25 14:22:03.415736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.000 [2024-07-25 14:22:03.415744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.000 [2024-07-25 14:22:03.415766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415776] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.000 [2024-07-25 14:22:03.415783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.415793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.415813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.415886] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.415897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.415904] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.415911] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.415926] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.415935] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.415942] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.415952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.415972] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.416047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.416066] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.416075] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416081] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.416098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416107] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.416124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.416144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.416222] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.416235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.416242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.416265] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416275] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416281] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.416292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.416312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.416393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.416407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.416417] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.416441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.416468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.416488] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.416563] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.416575] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.416582] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416589] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.416604] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416613] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.416630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.416650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.416728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.416742] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.416749] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.416772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.416798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.416819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.416898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.416912] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.416919] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416925] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.416940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.416956] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.416966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.416987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.417071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.417084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.417091] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417102] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.417119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417128] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417135] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.417145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.417166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.417240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.417252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.417259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 
[2024-07-25 14:22:03.417281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417290] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.417307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.417327] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.417399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.417411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.417418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.001 [2024-07-25 14:22:03.417440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.001 [2024-07-25 14:22:03.417466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.001 [2024-07-25 14:22:03.417486] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.001 [2024-07-25 14:22:03.417564] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.001 [2024-07-25 14:22:03.417577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.001 [2024-07-25 14:22:03.417583] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.001 [2024-07-25 14:22:03.417590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.417606] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.417615] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.417622] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.417632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.417652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.417724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.417736] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.417743] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.417750] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.417769] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.417779] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 
14:22:03.417786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.417796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.417817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.417887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.417899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.417906] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.417913] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.417928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.417938] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.417944] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.417955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.417975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.418056] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.418077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.418084] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418091] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.418107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418117] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.418134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.418155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.418231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.418243] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.418250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.418272] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418282] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.418299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.418319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.418392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.418404] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.418411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.418433] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418442] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.418463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.418484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.418557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.418569] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.418576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418582] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.418598] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418607] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418614] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.418624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.418644] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.418718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.418732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.418738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.418761] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.418788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.418808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 
14:22:03.418883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.418896] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.418902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418909] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.418924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418934] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.418940] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.418951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.418971] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.419047] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.423069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.423083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.423090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.423109] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.423119] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.423125] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x8a3540) 00:20:34.002 [2024-07-25 14:22:03.423142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.002 [2024-07-25 14:22:03.423166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x903840, cid 3, qid 0 00:20:34.002 [2024-07-25 14:22:03.423261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.002 [2024-07-25 14:22:03.423274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.002 [2024-07-25 14:22:03.423281] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.002 [2024-07-25 14:22:03.423288] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x903840) on tqpair=0x8a3540 00:20:34.002 [2024-07-25 14:22:03.423301] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:20:34.002 00:20:34.002 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:34.002 [2024-07-25 14:22:03.455723] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:20:34.003 [2024-07-25 14:22:03.455761] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969502 ] 00:20:34.003 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.003 [2024-07-25 14:22:03.489838] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:34.003 [2024-07-25 14:22:03.489883] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:34.003 [2024-07-25 14:22:03.489893] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:34.003 [2024-07-25 14:22:03.489906] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:34.003 [2024-07-25 14:22:03.489918] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:34.003 [2024-07-25 14:22:03.490148] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:34.003 [2024-07-25 14:22:03.490185] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f72540 0 00:20:34.003 [2024-07-25 14:22:03.497075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:34.003 [2024-07-25 14:22:03.497097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:34.003 [2024-07-25 14:22:03.497106] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:34.003 [2024-07-25 14:22:03.497112] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:34.003 [2024-07-25 14:22:03.497165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.497177] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.497184] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.003 [2024-07-25 14:22:03.497198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:34.003 [2024-07-25 14:22:03.497224] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.003 [2024-07-25 14:22:03.505071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.003 [2024-07-25 14:22:03.505089] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.003 [2024-07-25 14:22:03.505096] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505103] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.003 [2024-07-25 14:22:03.505123] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:34.003 [2024-07-25 14:22:03.505150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:34.003 [2024-07-25 14:22:03.505160] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:34.003 [2024-07-25 14:22:03.505178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:34.003 [2024-07-25 14:22:03.505194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.003 [2024-07-25 14:22:03.505206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.003 [2024-07-25 14:22:03.505229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.003 [2024-07-25 14:22:03.505352] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.003 [2024-07-25 14:22:03.505367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.003 [2024-07-25 14:22:03.505374] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.003 [2024-07-25 14:22:03.505393] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:34.003 [2024-07-25 14:22:03.505408] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:34.003 [2024-07-25 14:22:03.505421] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505435] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.003 [2024-07-25 14:22:03.505445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.003 [2024-07-25 14:22:03.505467] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.003 [2024-07-25 14:22:03.505552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.003 [2024-07-25 14:22:03.505564] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.003 [2024-07-25 14:22:03.505571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.003 [2024-07-25 14:22:03.505586] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:34.003 [2024-07-25 14:22:03.505600] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:34.003 [2024-07-25 14:22:03.505613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505627] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.003 [2024-07-25 14:22:03.505638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.003 [2024-07-25 14:22:03.505659] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.003 [2024-07-25 14:22:03.505742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.003 [2024-07-25 14:22:03.505755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:34.003 [2024-07-25 14:22:03.505762] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505769] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.003 [2024-07-25 14:22:03.505778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:34.003 [2024-07-25 14:22:03.505798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.003 [2024-07-25 14:22:03.505826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.003 [2024-07-25 14:22:03.505847] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.003 [2024-07-25 14:22:03.505925] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.003 [2024-07-25 14:22:03.505939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.003 [2024-07-25 14:22:03.505946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.505953] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.003 [2024-07-25 14:22:03.505960] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:34.003 [2024-07-25 14:22:03.505969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:34.003 [2024-07-25 14:22:03.505982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:34.003 [2024-07-25 14:22:03.506092] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:34.003 [2024-07-25 14:22:03.506101] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:34.003 [2024-07-25 14:22:03.506114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.506121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.506128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.003 [2024-07-25 14:22:03.506138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.003 [2024-07-25 14:22:03.506160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.003 [2024-07-25 14:22:03.506268] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.003 [2024-07-25 14:22:03.506281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.003 [2024-07-25 14:22:03.506287] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.506294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on 
tqpair=0x1f72540 00:20:34.003 [2024-07-25 14:22:03.506302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:34.003 [2024-07-25 14:22:03.506319] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.506328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.506334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.003 [2024-07-25 14:22:03.506345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.003 [2024-07-25 14:22:03.506365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.003 [2024-07-25 14:22:03.506442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.003 [2024-07-25 14:22:03.506456] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.003 [2024-07-25 14:22:03.506463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.003 [2024-07-25 14:22:03.506469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.004 [2024-07-25 14:22:03.506480] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:34.004 [2024-07-25 14:22:03.506489] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.506503] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:34.004 [2024-07-25 14:22:03.506520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.506535] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.506543] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.004 [2024-07-25 14:22:03.506554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.004 [2024-07-25 14:22:03.506575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.004 [2024-07-25 14:22:03.506684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.004 [2024-07-25 14:22:03.506699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.004 [2024-07-25 14:22:03.506706] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.506713] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f72540): datao=0, datal=4096, cccid=0 00:20:34.004 [2024-07-25 14:22:03.506721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd23c0) on tqpair(0x1f72540): expected_datao=0, payload_size=4096 00:20:34.004 [2024-07-25 14:22:03.506728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.506746] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.506755] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551074] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.004 [2024-07-25 14:22:03.551092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.004 [2024-07-25 14:22:03.551099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551106] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.004 [2024-07-25 14:22:03.551117] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:34.004 [2024-07-25 14:22:03.551126] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:34.004 [2024-07-25 14:22:03.551133] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:34.004 [2024-07-25 14:22:03.551140] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:34.004 [2024-07-25 14:22:03.551147] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:34.004 [2024-07-25 14:22:03.551155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.551169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.551201] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551211] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551218] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.004 [2024-07-25 14:22:03.551229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:34.004 [2024-07-25 14:22:03.551253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.004 [2024-07-25 14:22:03.551371] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.004 [2024-07-25 14:22:03.551384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.004 [2024-07-25 14:22:03.551391] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.004 [2024-07-25 14:22:03.551408] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f72540) 00:20:34.004 [2024-07-25 14:22:03.551433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.004 [2024-07-25 14:22:03.551443] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551456] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f72540) 00:20:34.004 [2024-07-25 14:22:03.551465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.004 [2024-07-25 14:22:03.551475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f72540) 00:20:34.004 [2024-07-25 14:22:03.551497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.004 [2024-07-25 14:22:03.551507] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551520] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f72540) 00:20:34.004 [2024-07-25 14:22:03.551529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.004 [2024-07-25 14:22:03.551538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.551557] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.551585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f72540) 00:20:34.004 [2024-07-25 14:22:03.551603] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.004 [2024-07-25 14:22:03.551625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd23c0, cid 0, qid 0 00:20:34.004 [2024-07-25 14:22:03.551652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2540, cid 1, qid 0 00:20:34.004 [2024-07-25 14:22:03.551660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd26c0, cid 2, qid 0 00:20:34.004 [2024-07-25 14:22:03.551668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2840, cid 3, qid 0 00:20:34.004 [2024-07-25 14:22:03.551675] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd29c0, cid 4, qid 0 00:20:34.004 [2024-07-25 14:22:03.551805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.004 [2024-07-25 14:22:03.551817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.004 [2024-07-25 14:22:03.551824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd29c0) on tqpair=0x1f72540 00:20:34.004 [2024-07-25 14:22:03.551839] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:34.004 [2024-07-25 14:22:03.551851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.551870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.551882] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:34.004 [2024-07-25 14:22:03.551894] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551901] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.004 [2024-07-25 14:22:03.551908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f72540) 00:20:34.004 [2024-07-25 14:22:03.551919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:34.004 [2024-07-25 14:22:03.551940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd29c0, cid 4, qid 0 00:20:34.005 [2024-07-25 14:22:03.552133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.005 [2024-07-25 14:22:03.552149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.005 [2024-07-25 14:22:03.552156] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.552163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd29c0) on tqpair=0x1f72540 00:20:34.005 [2024-07-25 14:22:03.552232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:34.005 [2024-07-25 14:22:03.552252] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:34.005 [2024-07-25 14:22:03.552268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.552276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f72540) 00:20:34.005 [2024-07-25 14:22:03.552286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.005 [2024-07-25 14:22:03.552308] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd29c0, cid 4, qid 0 00:20:34.005 [2024-07-25 14:22:03.552438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.005 [2024-07-25 14:22:03.552452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.005 [2024-07-25 14:22:03.552458] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.552465] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f72540): datao=0, datal=4096, cccid=4 00:20:34.005 [2024-07-25 14:22:03.552473] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd29c0) on tqpair(0x1f72540): expected_datao=0, payload_size=4096 00:20:34.005 [2024-07-25 14:22:03.552480] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.552490] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.552498] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.593154] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:20:34.005 [2024-07-25 14:22:03.593173] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.005 [2024-07-25 14:22:03.593180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.593188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd29c0) on tqpair=0x1f72540 00:20:34.005 [2024-07-25 14:22:03.593205] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:34.005 [2024-07-25 14:22:03.593223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:34.005 [2024-07-25 14:22:03.593245] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:34.005 [2024-07-25 14:22:03.593261] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.593269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f72540) 00:20:34.005 [2024-07-25 14:22:03.593280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.005 [2024-07-25 14:22:03.593303] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd29c0, cid 4, qid 0 00:20:34.005 [2024-07-25 14:22:03.593406] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.005 [2024-07-25 14:22:03.593421] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.005 [2024-07-25 14:22:03.593428] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.593434] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f72540): datao=0, datal=4096, cccid=4 00:20:34.005 [2024-07-25 14:22:03.593442] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd29c0) on tqpair(0x1f72540): expected_datao=0, payload_size=4096 00:20:34.005 [2024-07-25 14:22:03.593450] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.593467] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.593477] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.636071] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.005 [2024-07-25 14:22:03.636092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.005 [2024-07-25 14:22:03.636099] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.636107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd29c0) on tqpair=0x1f72540 00:20:34.005 [2024-07-25 14:22:03.636132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:34.005 [2024-07-25 14:22:03.636153] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:34.005 [2024-07-25 14:22:03.636169] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.636177] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f72540) 00:20:34.005 [2024-07-25 14:22:03.636189] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.005 [2024-07-25 14:22:03.636213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd29c0, cid 4, qid 0 00:20:34.005 [2024-07-25 14:22:03.636339] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.005 [2024-07-25 14:22:03.636362] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.005 [2024-07-25 14:22:03.636369] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.636376] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f72540): datao=0, datal=4096, cccid=4 00:20:34.005 [2024-07-25 14:22:03.636383] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd29c0) on tqpair(0x1f72540): expected_datao=0, payload_size=4096 00:20:34.005 [2024-07-25 14:22:03.636391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.636408] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.005 [2024-07-25 14:22:03.636418] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677172] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.277 [2024-07-25 14:22:03.677191] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.277 [2024-07-25 14:22:03.677198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677205] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd29c0) on tqpair=0x1f72540 00:20:34.277 [2024-07-25 14:22:03.677224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:34.277 [2024-07-25 14:22:03.677240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:34.277 [2024-07-25 14:22:03.677256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:34.277 [2024-07-25 14:22:03.677270] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:34.277 [2024-07-25 14:22:03.677280] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:34.277 [2024-07-25 14:22:03.677289] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:34.277 [2024-07-25 14:22:03.677298] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:34.277 [2024-07-25 14:22:03.677306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:34.277 [2024-07-25 14:22:03.677315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:34.277 [2024-07-25 14:22:03.677335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1f72540) 00:20:34.277 [2024-07-25 14:22:03.677355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.277 [2024-07-25 14:22:03.677374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f72540) 00:20:34.277 [2024-07-25 14:22:03.677397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:34.277 [2024-07-25 14:22:03.677424] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd29c0, cid 4, qid 0 00:20:34.277 [2024-07-25 14:22:03.677436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2b40, cid 5, qid 0 00:20:34.277 [2024-07-25 14:22:03.677529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.277 [2024-07-25 14:22:03.677541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.277 [2024-07-25 14:22:03.677548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd29c0) on tqpair=0x1f72540 00:20:34.277 [2024-07-25 14:22:03.677565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.277 [2024-07-25 14:22:03.677574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.277 [2024-07-25 14:22:03.677581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677587] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2b40) on tqpair=0x1f72540 00:20:34.277 [2024-07-25 14:22:03.677603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677612] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f72540) 00:20:34.277 [2024-07-25 14:22:03.677623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.277 [2024-07-25 14:22:03.677643] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2b40, cid 5, qid 0 00:20:34.277 [2024-07-25 14:22:03.677720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.277 [2024-07-25 14:22:03.677733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.277 [2024-07-25 14:22:03.677744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677752] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2b40) on tqpair=0x1f72540 00:20:34.277 [2024-07-25 14:22:03.677768] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.277 [2024-07-25 14:22:03.677777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f72540) 00:20:34.277 [2024-07-25 14:22:03.677787] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.277 [2024-07-25 14:22:03.677807] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2b40, cid 5, qid 0 00:20:34.277 [2024-07-25 14:22:03.677887] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.277 [2024-07-25 14:22:03.677900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.277 [2024-07-25 14:22:03.677907] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.677914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2b40) on tqpair=0x1f72540 00:20:34.278 [2024-07-25 14:22:03.677929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.677939] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f72540) 00:20:34.278 [2024-07-25 14:22:03.677949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.278 [2024-07-25 14:22:03.677969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2b40, cid 5, qid 0 00:20:34.278 [2024-07-25 14:22:03.678075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.278 [2024-07-25 14:22:03.678090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.278 [2024-07-25 14:22:03.678097] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2b40) on tqpair=0x1f72540 00:20:34.278 [2024-07-25 14:22:03.678129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678139] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f72540) 00:20:34.278 [2024-07-25 14:22:03.678150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.278 [2024-07-25 14:22:03.678163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f72540) 00:20:34.278 [2024-07-25 14:22:03.678180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.278 [2024-07-25 14:22:03.678192] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f72540) 00:20:34.278 [2024-07-25 14:22:03.678209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.278 [2024-07-25 14:22:03.678221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678229] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f72540) 00:20:34.278 [2024-07-25 14:22:03.678238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.278 [2024-07-25 14:22:03.678261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2b40, cid 5, qid 0 00:20:34.278 [2024-07-25 14:22:03.678272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd29c0, cid 4, qid 0 
00:20:34.278 [2024-07-25 14:22:03.678280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2cc0, cid 6, qid 0 00:20:34.278 [2024-07-25 14:22:03.678291] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2e40, cid 7, qid 0 00:20:34.278 [2024-07-25 14:22:03.678452] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.278 [2024-07-25 14:22:03.678465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.278 [2024-07-25 14:22:03.678472] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678478] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f72540): datao=0, datal=8192, cccid=5 00:20:34.278 [2024-07-25 14:22:03.678486] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd2b40) on tqpair(0x1f72540): expected_datao=0, payload_size=8192 00:20:34.278 [2024-07-25 14:22:03.678494] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678512] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678521] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.278 [2024-07-25 14:22:03.678544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.278 [2024-07-25 14:22:03.678551] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678557] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f72540): datao=0, datal=512, cccid=4 00:20:34.278 [2024-07-25 14:22:03.678565] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd29c0) on tqpair(0x1f72540): expected_datao=0, payload_size=512 00:20:34.278 [2024-07-25 14:22:03.678572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678582] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678589] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678597] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.278 [2024-07-25 14:22:03.678606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.278 [2024-07-25 14:22:03.678612] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678619] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f72540): datao=0, datal=512, cccid=6 00:20:34.278 [2024-07-25 14:22:03.678626] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd2cc0) on tqpair(0x1f72540): expected_datao=0, payload_size=512 00:20:34.278 [2024-07-25 14:22:03.678634] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678643] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678650] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:34.278 [2024-07-25 14:22:03.678667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:34.278 [2024-07-25 14:22:03.678674] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678680] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f72540): datao=0, datal=4096, cccid=7 00:20:34.278 [2024-07-25 14:22:03.678687] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1fd2e40) on tqpair(0x1f72540): expected_datao=0, payload_size=4096 00:20:34.278 [2024-07-25 14:22:03.678695] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678704] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678711] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678719] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.278 [2024-07-25 14:22:03.678728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.278 [2024-07-25 14:22:03.678734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2b40) on tqpair=0x1f72540 00:20:34.278 [2024-07-25 14:22:03.678775] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.278 [2024-07-25 14:22:03.678789] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.278 [2024-07-25 14:22:03.678796] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678803] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd29c0) on tqpair=0x1f72540 00:20:34.278 [2024-07-25 14:22:03.678832] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.278 [2024-07-25 14:22:03.678843] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.278 [2024-07-25 14:22:03.678849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2cc0) on tqpair=0x1f72540 00:20:34.278 [2024-07-25 14:22:03.678865] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.278 [2024-07-25 14:22:03.678874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.278 [2024-07-25 14:22:03.678880] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.278 [2024-07-25 14:22:03.678887] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2e40) on tqpair=0x1f72540 00:20:34.278 ===================================================== 00:20:34.278 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.278 ===================================================== 00:20:34.278 Controller Capabilities/Features 00:20:34.278 ================================ 00:20:34.278 Vendor ID: 8086 00:20:34.278 Subsystem Vendor ID: 8086 00:20:34.278 Serial Number: SPDK00000000000001 00:20:34.278 Model Number: SPDK bdev Controller 00:20:34.278 Firmware Version: 24.09 00:20:34.278 Recommended Arb Burst: 6 00:20:34.278 IEEE OUI Identifier: e4 d2 5c 00:20:34.278 Multi-path I/O 00:20:34.278 May have multiple subsystem ports: Yes 00:20:34.278 May have multiple controllers: Yes 00:20:34.278 Associated with SR-IOV VF: No 00:20:34.278 Max Data Transfer Size: 131072 00:20:34.278 Max Number of Namespaces: 32 00:20:34.278 Max Number of I/O Queues: 127 00:20:34.278 NVMe Specification Version (VS): 1.3 00:20:34.278 NVMe Specification Version (Identify): 1.3 00:20:34.278 Maximum Queue Entries: 128 00:20:34.278 Contiguous Queues Required: Yes 00:20:34.278 
Arbitration Mechanisms Supported 00:20:34.278 Weighted Round Robin: Not Supported 00:20:34.278 Vendor Specific: Not Supported 00:20:34.278 Reset Timeout: 15000 ms 00:20:34.278 Doorbell Stride: 4 bytes 00:20:34.278 NVM Subsystem Reset: Not Supported 00:20:34.278 Command Sets Supported 00:20:34.278 NVM Command Set: Supported 00:20:34.278 Boot Partition: Not Supported 00:20:34.278 Memory Page Size Minimum: 4096 bytes 00:20:34.278 Memory Page Size Maximum: 4096 bytes 00:20:34.278 Persistent Memory Region: Not Supported 00:20:34.278 Optional Asynchronous Events Supported 00:20:34.278 Namespace Attribute Notices: Supported 00:20:34.279 Firmware Activation Notices: Not Supported 00:20:34.279 ANA Change Notices: Not Supported 00:20:34.279 PLE Aggregate Log Change Notices: Not Supported 00:20:34.279 LBA Status Info Alert Notices: Not Supported 00:20:34.279 EGE Aggregate Log Change Notices: Not Supported 00:20:34.279 Normal NVM Subsystem Shutdown event: Not Supported 00:20:34.279 Zone Descriptor Change Notices: Not Supported 00:20:34.279 Discovery Log Change Notices: Not Supported 00:20:34.279 Controller Attributes 00:20:34.279 128-bit Host Identifier: Supported 00:20:34.279 Non-Operational Permissive Mode: Not Supported 00:20:34.279 NVM Sets: Not Supported 00:20:34.279 Read Recovery Levels: Not Supported 00:20:34.279 Endurance Groups: Not Supported 00:20:34.279 Predictable Latency Mode: Not Supported 00:20:34.279 Traffic Based Keep ALive: Not Supported 00:20:34.279 Namespace Granularity: Not Supported 00:20:34.279 SQ Associations: Not Supported 00:20:34.279 UUID List: Not Supported 00:20:34.279 Multi-Domain Subsystem: Not Supported 00:20:34.279 Fixed Capacity Management: Not Supported 00:20:34.279 Variable Capacity Management: Not Supported 00:20:34.279 Delete Endurance Group: Not Supported 00:20:34.279 Delete NVM Set: Not Supported 00:20:34.279 Extended LBA Formats Supported: Not Supported 00:20:34.279 Flexible Data Placement Supported: Not Supported 00:20:34.279 00:20:34.279 Controller Memory Buffer Support 00:20:34.279 ================================ 00:20:34.279 Supported: No 00:20:34.279 00:20:34.279 Persistent Memory Region Support 00:20:34.279 ================================ 00:20:34.279 Supported: No 00:20:34.279 00:20:34.279 Admin Command Set Attributes 00:20:34.279 ============================ 00:20:34.279 Security Send/Receive: Not Supported 00:20:34.279 Format NVM: Not Supported 00:20:34.279 Firmware Activate/Download: Not Supported 00:20:34.279 Namespace Management: Not Supported 00:20:34.279 Device Self-Test: Not Supported 00:20:34.279 Directives: Not Supported 00:20:34.279 NVMe-MI: Not Supported 00:20:34.279 Virtualization Management: Not Supported 00:20:34.279 Doorbell Buffer Config: Not Supported 00:20:34.279 Get LBA Status Capability: Not Supported 00:20:34.279 Command & Feature Lockdown Capability: Not Supported 00:20:34.279 Abort Command Limit: 4 00:20:34.279 Async Event Request Limit: 4 00:20:34.279 Number of Firmware Slots: N/A 00:20:34.279 Firmware Slot 1 Read-Only: N/A 00:20:34.279 Firmware Activation Without Reset: N/A 00:20:34.279 Multiple Update Detection Support: N/A 00:20:34.279 Firmware Update Granularity: No Information Provided 00:20:34.279 Per-Namespace SMART Log: No 00:20:34.279 Asymmetric Namespace Access Log Page: Not Supported 00:20:34.279 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:34.279 Command Effects Log Page: Supported 00:20:34.279 Get Log Page Extended Data: Supported 00:20:34.279 Telemetry Log Pages: Not Supported 00:20:34.279 Persistent Event Log 
Pages: Not Supported 00:20:34.279 Supported Log Pages Log Page: May Support 00:20:34.279 Commands Supported & Effects Log Page: Not Supported 00:20:34.279 Feature Identifiers & Effects Log Page:May Support 00:20:34.279 NVMe-MI Commands & Effects Log Page: May Support 00:20:34.279 Data Area 4 for Telemetry Log: Not Supported 00:20:34.279 Error Log Page Entries Supported: 128 00:20:34.279 Keep Alive: Supported 00:20:34.279 Keep Alive Granularity: 10000 ms 00:20:34.279 00:20:34.279 NVM Command Set Attributes 00:20:34.279 ========================== 00:20:34.279 Submission Queue Entry Size 00:20:34.279 Max: 64 00:20:34.279 Min: 64 00:20:34.279 Completion Queue Entry Size 00:20:34.279 Max: 16 00:20:34.279 Min: 16 00:20:34.279 Number of Namespaces: 32 00:20:34.279 Compare Command: Supported 00:20:34.279 Write Uncorrectable Command: Not Supported 00:20:34.279 Dataset Management Command: Supported 00:20:34.279 Write Zeroes Command: Supported 00:20:34.279 Set Features Save Field: Not Supported 00:20:34.279 Reservations: Supported 00:20:34.279 Timestamp: Not Supported 00:20:34.279 Copy: Supported 00:20:34.279 Volatile Write Cache: Present 00:20:34.279 Atomic Write Unit (Normal): 1 00:20:34.279 Atomic Write Unit (PFail): 1 00:20:34.279 Atomic Compare & Write Unit: 1 00:20:34.279 Fused Compare & Write: Supported 00:20:34.279 Scatter-Gather List 00:20:34.279 SGL Command Set: Supported 00:20:34.279 SGL Keyed: Supported 00:20:34.279 SGL Bit Bucket Descriptor: Not Supported 00:20:34.279 SGL Metadata Pointer: Not Supported 00:20:34.279 Oversized SGL: Not Supported 00:20:34.279 SGL Metadata Address: Not Supported 00:20:34.279 SGL Offset: Supported 00:20:34.279 Transport SGL Data Block: Not Supported 00:20:34.279 Replay Protected Memory Block: Not Supported 00:20:34.279 00:20:34.279 Firmware Slot Information 00:20:34.279 ========================= 00:20:34.279 Active slot: 1 00:20:34.279 Slot 1 Firmware Revision: 24.09 00:20:34.279 00:20:34.279 00:20:34.279 Commands Supported and Effects 00:20:34.279 ============================== 00:20:34.279 Admin Commands 00:20:34.279 -------------- 00:20:34.279 Get Log Page (02h): Supported 00:20:34.279 Identify (06h): Supported 00:20:34.279 Abort (08h): Supported 00:20:34.279 Set Features (09h): Supported 00:20:34.279 Get Features (0Ah): Supported 00:20:34.279 Asynchronous Event Request (0Ch): Supported 00:20:34.279 Keep Alive (18h): Supported 00:20:34.279 I/O Commands 00:20:34.279 ------------ 00:20:34.279 Flush (00h): Supported LBA-Change 00:20:34.279 Write (01h): Supported LBA-Change 00:20:34.279 Read (02h): Supported 00:20:34.279 Compare (05h): Supported 00:20:34.279 Write Zeroes (08h): Supported LBA-Change 00:20:34.279 Dataset Management (09h): Supported LBA-Change 00:20:34.279 Copy (19h): Supported LBA-Change 00:20:34.279 00:20:34.279 Error Log 00:20:34.279 ========= 00:20:34.279 00:20:34.279 Arbitration 00:20:34.279 =========== 00:20:34.279 Arbitration Burst: 1 00:20:34.279 00:20:34.279 Power Management 00:20:34.279 ================ 00:20:34.279 Number of Power States: 1 00:20:34.279 Current Power State: Power State #0 00:20:34.279 Power State #0: 00:20:34.279 Max Power: 0.00 W 00:20:34.279 Non-Operational State: Operational 00:20:34.279 Entry Latency: Not Reported 00:20:34.279 Exit Latency: Not Reported 00:20:34.279 Relative Read Throughput: 0 00:20:34.279 Relative Read Latency: 0 00:20:34.279 Relative Write Throughput: 0 00:20:34.279 Relative Write Latency: 0 00:20:34.279 Idle Power: Not Reported 00:20:34.279 Active Power: Not Reported 00:20:34.279 
Non-Operational Permissive Mode: Not Supported 00:20:34.279 00:20:34.279 Health Information 00:20:34.279 ================== 00:20:34.279 Critical Warnings: 00:20:34.279 Available Spare Space: OK 00:20:34.279 Temperature: OK 00:20:34.279 Device Reliability: OK 00:20:34.279 Read Only: No 00:20:34.279 Volatile Memory Backup: OK 00:20:34.279 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:34.279 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:34.279 Available Spare: 0% 00:20:34.279 Available Spare Threshold: 0% 00:20:34.279 Life Percentage Used:[2024-07-25 14:22:03.679011] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.279 [2024-07-25 14:22:03.679024] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f72540) 00:20:34.279 [2024-07-25 14:22:03.679034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.279 [2024-07-25 14:22:03.679080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2e40, cid 7, qid 0 00:20:34.279 [2024-07-25 14:22:03.679188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.279 [2024-07-25 14:22:03.679201] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.279 [2024-07-25 14:22:03.679207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.279 [2024-07-25 14:22:03.679214] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2e40) on tqpair=0x1f72540 00:20:34.279 [2024-07-25 14:22:03.679259] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:34.279 [2024-07-25 14:22:03.679278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd23c0) on tqpair=0x1f72540 00:20:34.279 [2024-07-25 14:22:03.679289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.280 [2024-07-25 14:22:03.679298] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2540) on tqpair=0x1f72540 00:20:34.280 [2024-07-25 14:22:03.679306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.280 [2024-07-25 14:22:03.679314] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd26c0) on tqpair=0x1f72540 00:20:34.280 [2024-07-25 14:22:03.679322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.280 [2024-07-25 14:22:03.679330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2840) on tqpair=0x1f72540 00:20:34.280 [2024-07-25 14:22:03.679338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:34.280 [2024-07-25 14:22:03.679351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679380] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f72540) 00:20:34.280 [2024-07-25 14:22:03.679391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.280 [2024-07-25 14:22:03.679413] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2840, cid 3, qid 0 00:20:34.280 [2024-07-25 14:22:03.679534] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.280 [2024-07-25 14:22:03.679546] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.280 [2024-07-25 14:22:03.679557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2840) on tqpair=0x1f72540 00:20:34.280 [2024-07-25 14:22:03.679575] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679583] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679590] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f72540) 00:20:34.280 [2024-07-25 14:22:03.679600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.280 [2024-07-25 14:22:03.679626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2840, cid 3, qid 0 00:20:34.280 [2024-07-25 14:22:03.679724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.280 [2024-07-25 14:22:03.679738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.280 [2024-07-25 14:22:03.679744] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2840) on tqpair=0x1f72540 00:20:34.280 [2024-07-25 14:22:03.679759] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:34.280 [2024-07-25 14:22:03.679767] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:34.280 [2024-07-25 14:22:03.679783] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f72540) 00:20:34.280 [2024-07-25 14:22:03.679809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.280 [2024-07-25 14:22:03.679830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2840, cid 3, qid 0 00:20:34.280 [2024-07-25 14:22:03.679908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.280 [2024-07-25 14:22:03.679922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.280 [2024-07-25 14:22:03.679928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2840) on tqpair=0x1f72540 00:20:34.280 [2024-07-25 14:22:03.679951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679960] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.679966] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f72540) 00:20:34.280 [2024-07-25 14:22:03.679977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.280 [2024-07-25 14:22:03.679997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2840, cid 3, qid 0 00:20:34.280 [2024-07-25 14:22:03.684079] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.280 [2024-07-25 14:22:03.684106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.280 [2024-07-25 14:22:03.684114] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.684120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2840) on tqpair=0x1f72540 00:20:34.280 [2024-07-25 14:22:03.684138] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.684148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.684155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f72540) 00:20:34.280 [2024-07-25 14:22:03.684166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:34.280 [2024-07-25 14:22:03.684188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1fd2840, cid 3, qid 0 00:20:34.280 [2024-07-25 14:22:03.684294] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:34.280 [2024-07-25 14:22:03.684307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:34.280 [2024-07-25 14:22:03.684314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:34.280 [2024-07-25 14:22:03.684320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1fd2840) on tqpair=0x1f72540 00:20:34.280 [2024-07-25 14:22:03.684333] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:34.280 0% 00:20:34.280 Data Units Read: 0 00:20:34.280 Data Units Written: 0 00:20:34.280 Host Read Commands: 0 00:20:34.280 Host Write Commands: 0 00:20:34.280 Controller Busy Time: 0 minutes 00:20:34.280 Power Cycles: 0 00:20:34.280 Power On Hours: 0 hours 00:20:34.280 Unsafe Shutdowns: 0 00:20:34.280 Unrecoverable Media Errors: 0 00:20:34.280 Lifetime Error Log Entries: 0 00:20:34.280 Warning Temperature Time: 0 minutes 00:20:34.280 Critical Temperature Time: 0 minutes 00:20:34.280 00:20:34.280 Number of Queues 00:20:34.280 ================ 00:20:34.280 Number of I/O Submission Queues: 127 00:20:34.280 Number of I/O Completion Queues: 127 00:20:34.280 00:20:34.280 Active Namespaces 00:20:34.280 ================= 00:20:34.280 Namespace ID:1 00:20:34.280 Error Recovery Timeout: Unlimited 00:20:34.280 Command Set Identifier: NVM (00h) 00:20:34.280 Deallocate: Supported 00:20:34.280 Deallocated/Unwritten Error: Not Supported 00:20:34.280 Deallocated Read Value: Unknown 00:20:34.280 Deallocate in Write Zeroes: Not Supported 00:20:34.280 Deallocated Guard Field: 0xFFFF 00:20:34.280 Flush: Supported 00:20:34.280 Reservation: Supported 00:20:34.280 Namespace Sharing Capabilities: Multiple Controllers 00:20:34.280 Size (in LBAs): 131072 (0GiB) 00:20:34.280 Capacity (in LBAs): 131072 (0GiB) 00:20:34.280 Utilization (in LBAs): 131072 (0GiB) 00:20:34.280 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:34.280 EUI64: ABCDEF0123456789 00:20:34.280 UUID: 9b22b228-7e4b-4486-8007-ab618cd260c4 00:20:34.280 Thin Provisioning: Not Supported 00:20:34.280 Per-NS Atomic Units: Yes 00:20:34.280 Atomic Boundary Size (Normal): 0 
00:20:34.280 Atomic Boundary Size (PFail): 0 00:20:34.280 Atomic Boundary Offset: 0 00:20:34.280 Maximum Single Source Range Length: 65535 00:20:34.280 Maximum Copy Length: 65535 00:20:34.280 Maximum Source Range Count: 1 00:20:34.280 NGUID/EUI64 Never Reused: No 00:20:34.280 Namespace Write Protected: No 00:20:34.280 Number of LBA Formats: 1 00:20:34.280 Current LBA Format: LBA Format #00 00:20:34.280 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:34.280 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:34.280 rmmod nvme_tcp 00:20:34.280 rmmod nvme_fabrics 00:20:34.280 rmmod nvme_keyring 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:34.280 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 969396 ']' 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 969396 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 969396 ']' 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 969396 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 969396 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 969396' 00:20:34.281 killing process with pid 969396 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 969396 00:20:34.281 14:22:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 969396 
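Note: in outline, the nvmf_identify teardown logged around this point amounts to the short sequence sketched below. It is a minimal approximation assembled from the commands visible in this run (subsystem NQN nqn.2016-06.io.spdk:cnode1, target pid 969396, initiator interface cvl_0_1); the real nvmftestfini/killprocess helpers in nvmf/common.sh and autotest_common.sh do additional bookkeeping.

  # Sketch of the teardown steps shown in this log; values are taken from this run.
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem (rpc_cmd in the log)
  modprobe -v -r nvme-tcp                                           # unload host-side kernel modules
  modprobe -v -r nvme-fabrics
  kill 969396 && wait 969396                                        # stop nvmf_tgt; wait works because it is a child of the test shell
  ip -4 addr flush cvl_0_1                                          # clear the initiator-side address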
00:20:34.540 14:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:34.540 14:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:34.540 14:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:34.540 14:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:34.540 14:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:34.540 14:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.540 14:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.540 14:22:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:37.077 00:20:37.077 real 0m5.652s 00:20:37.077 user 0m5.005s 00:20:37.077 sys 0m1.924s 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:37.077 ************************************ 00:20:37.077 END TEST nvmf_identify 00:20:37.077 ************************************ 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:37.077 ************************************ 00:20:37.077 START TEST nvmf_perf 00:20:37.077 ************************************ 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:37.077 * Looking for test storage... 
00:20:37.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:37.077 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
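For orientation: further down in this log, perf.sh turns these variables into a target configuration via rpc.py. Condensed into one place, with the values from this run (64 MB malloc bdev with 512-byte blocks, the local NVMe at 0000:88:00.0 exported as a second namespace, TCP listener on 10.0.0.2:4420), the sequence is roughly as sketched below; this is not a literal transcript of perf.sh, just the RPC calls that appear later in the log.

  # Condensed sketch of the RPC calls issued later in this run.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                      # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  $rpc nvmf_create_transport -t tcp -o                                # options as logged
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0       # namespace 1: malloc bdev
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1       # namespace 2: local NVMe (traddr 0000:88:00.0)
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420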
00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:37.078 14:22:06 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.982 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.983 
14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:38.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:38.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:20:38.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:38.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:38.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:20:38.983 00:20:38.983 --- 10.0.0.2 ping statistics --- 00:20:38.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.983 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:20:38.983 00:20:38.983 --- 10.0.0.1 ping statistics --- 00:20:38.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.983 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=971490 00:20:38.983 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:38.984 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 971490 00:20:38.984 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 971490 ']' 00:20:38.984 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.984 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.984 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
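The network bring-up just above (nvmf_tcp_init in nvmf/common.sh) moves one of the two ice ports into a target-side network namespace, addresses both ends, opens the NVMe/TCP port, and checks reachability with ping. A minimal sketch using the names and addresses from this run (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2, port 4420):

  # Sketch of the nvmf_tcp_init steps logged above; interface names are from this run.
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address (host namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator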
00:20:38.984 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.984 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:38.984 [2024-07-25 14:22:08.493737] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:20:38.984 [2024-07-25 14:22:08.493825] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.984 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.984 [2024-07-25 14:22:08.564871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.242 [2024-07-25 14:22:08.680812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.242 [2024-07-25 14:22:08.680869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.242 [2024-07-25 14:22:08.680884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.242 [2024-07-25 14:22:08.680896] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.242 [2024-07-25 14:22:08.680905] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.242 [2024-07-25 14:22:08.680969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.242 [2024-07-25 14:22:08.680996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.242 [2024-07-25 14:22:08.681056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.242 [2024-07-25 14:22:08.681064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.242 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.242 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:39.242 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.242 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.242 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:39.242 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.242 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:39.242 14:22:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:42.531 14:22:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:42.531 14:22:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:42.790 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:42.790 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:43.048 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:43.048 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:88:00.0 ']' 00:20:43.048 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:43.048 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:43.048 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.306 [2024-07-25 14:22:12.719908] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.306 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:43.564 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:43.564 14:22:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:43.825 14:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:43.825 14:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:44.084 14:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:44.084 [2024-07-25 14:22:13.715447] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:44.084 14:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:44.342 14:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:44.342 14:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:44.342 14:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:44.342 14:22:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:45.720 Initializing NVMe Controllers 00:20:45.720 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:45.720 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:45.720 Initialization complete. Launching workers. 
00:20:45.720 ======================================================== 00:20:45.720 Latency(us) 00:20:45.720 Device Information : IOPS MiB/s Average min max 00:20:45.720 PCIE (0000:88:00.0) NSID 1 from core 0: 85365.77 333.46 374.35 33.96 5297.04 00:20:45.720 ======================================================== 00:20:45.720 Total : 85365.77 333.46 374.35 33.96 5297.04 00:20:45.720 00:20:45.720 14:22:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:45.720 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.095 Initializing NVMe Controllers 00:20:47.095 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.095 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:47.095 Initialization complete. Launching workers. 00:20:47.095 ======================================================== 00:20:47.095 Latency(us) 00:20:47.095 Device Information : IOPS MiB/s Average min max 00:20:47.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.00 0.28 14597.48 142.58 47060.62 00:20:47.095 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.00 0.19 21758.27 6958.23 55846.44 00:20:47.095 ======================================================== 00:20:47.095 Total : 119.00 0.46 17485.87 142.58 55846.44 00:20:47.095 00:20:47.095 14:22:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:47.095 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.472 Initializing NVMe Controllers 00:20:48.472 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:48.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:48.472 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:48.472 Initialization complete. Launching workers. 
00:20:48.472 ======================================================== 00:20:48.472 Latency(us) 00:20:48.472 Device Information : IOPS MiB/s Average min max 00:20:48.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8360.60 32.66 3827.80 562.73 10877.07 00:20:48.472 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3710.06 14.49 8655.32 4822.33 23968.69 00:20:48.472 ======================================================== 00:20:48.472 Total : 12070.66 47.15 5311.59 562.73 23968.69 00:20:48.472 00:20:48.472 14:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:48.472 14:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:48.472 14:22:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:48.472 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.001 Initializing NVMe Controllers 00:20:51.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.001 Controller IO queue size 128, less than required. 00:20:51.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:51.001 Controller IO queue size 128, less than required. 00:20:51.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:51.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:51.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:51.001 Initialization complete. Launching workers. 00:20:51.001 ======================================================== 00:20:51.001 Latency(us) 00:20:51.001 Device Information : IOPS MiB/s Average min max 00:20:51.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1579.49 394.87 82837.92 63059.76 136402.88 00:20:51.001 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 567.82 141.95 233884.27 85363.57 350501.48 00:20:51.001 ======================================================== 00:20:51.001 Total : 2147.30 536.83 122779.41 63059.76 350501.48 00:20:51.001 00:20:51.001 14:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:51.001 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.001 No valid NVMe controllers or AIO or URING devices found 00:20:51.001 Initializing NVMe Controllers 00:20:51.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:51.001 Controller IO queue size 128, less than required. 00:20:51.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:51.001 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:51.001 Controller IO queue size 128, less than required. 00:20:51.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:51.001 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:51.001 WARNING: Some requested NVMe devices were skipped 00:20:51.001 14:22:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:51.001 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.529 Initializing NVMe Controllers 00:20:53.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.529 Controller IO queue size 128, less than required. 00:20:53.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.529 Controller IO queue size 128, less than required. 00:20:53.529 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:53.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:53.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:53.529 Initialization complete. Launching workers. 00:20:53.529 00:20:53.529 ==================== 00:20:53.529 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:53.529 TCP transport: 00:20:53.529 polls: 9237 00:20:53.529 idle_polls: 6025 00:20:53.529 sock_completions: 3212 00:20:53.529 nvme_completions: 5853 00:20:53.529 submitted_requests: 8844 00:20:53.529 queued_requests: 1 00:20:53.529 00:20:53.529 ==================== 00:20:53.529 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:53.529 TCP transport: 00:20:53.529 polls: 6561 00:20:53.529 idle_polls: 2989 00:20:53.529 sock_completions: 3572 00:20:53.529 nvme_completions: 6367 00:20:53.529 submitted_requests: 9588 00:20:53.529 queued_requests: 1 00:20:53.529 ======================================================== 00:20:53.529 Latency(us) 00:20:53.529 Device Information : IOPS MiB/s Average min max 00:20:53.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1460.75 365.19 90330.28 66191.98 151876.59 00:20:53.529 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1589.05 397.26 81403.33 46727.14 112154.10 00:20:53.529 ======================================================== 00:20:53.529 Total : 3049.80 762.45 85679.03 46727.14 151876.59 00:20:53.529 00:20:53.529 14:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:53.529 14:22:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.787 rmmod nvme_tcp 00:20:53.787 rmmod nvme_fabrics 00:20:53.787 rmmod nvme_keyring 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 971490 ']' 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 971490 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 971490 ']' 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 971490 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 971490 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 971490' 00:20:53.787 killing process with pid 971490 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 971490 00:20:53.787 14:22:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 971490 00:20:55.725 14:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:55.725 14:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:55.725 14:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:55.725 14:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.725 14:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:55.725 14:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.725 14:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.725 14:22:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:57.632 00:20:57.632 real 0m20.912s 00:20:57.632 user 1m3.828s 00:20:57.632 sys 0m5.379s 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:57.632 ************************************ 00:20:57.632 END TEST nvmf_perf 00:20:57.632 ************************************ 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.632 ************************************ 00:20:57.632 START TEST nvmf_fio_host 00:20:57.632 ************************************ 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:57.632 * Looking for test storage... 00:20:57.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.632 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.633 14:22:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.162 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:00.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:00.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:00.163 14:22:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:00.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:00.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:00.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:21:00.163 00:21:00.163 --- 10.0.0.2 ping statistics --- 00:21:00.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.163 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:21:00.163 00:21:00.163 --- 10.0.0.1 ping statistics --- 00:21:00.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.163 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.163 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=975409 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 975409 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 975409 ']' 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:00.164 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.164 [2024-07-25 14:22:29.542387] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:21:00.164 [2024-07-25 14:22:29.542459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.164 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.164 [2024-07-25 14:22:29.607943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.164 [2024-07-25 14:22:29.712538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.164 [2024-07-25 14:22:29.712610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.164 [2024-07-25 14:22:29.712634] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.164 [2024-07-25 14:22:29.712645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.164 [2024-07-25 14:22:29.712654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.164 [2024-07-25 14:22:29.712742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.164 [2024-07-25 14:22:29.712805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.164 [2024-07-25 14:22:29.712874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.164 [2024-07-25 14:22:29.712871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.422 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:00.422 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:21:00.422 14:22:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:00.680 [2024-07-25 14:22:30.126657] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:00.680 14:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:00.680 14:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:00.680 14:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.680 14:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:00.938 Malloc1 00:21:00.938 14:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:01.195 14:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:01.452 14:22:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.709 [2024-07-25 14:22:31.202638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.709 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:01.966 
14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:21:01.966 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:01.967 14:22:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:02.224 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:02.224 fio-3.35 00:21:02.224 Starting 
1 thread 00:21:02.224 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.750 00:21:04.750 test: (groupid=0, jobs=1): err= 0: pid=975772: Thu Jul 25 14:22:34 2024 00:21:04.750 read: IOPS=8705, BW=34.0MiB/s (35.7MB/s)(68.2MiB/2006msec) 00:21:04.750 slat (nsec): min=1971, max=157156, avg=2712.53, stdev=1965.53 00:21:04.750 clat (usec): min=2637, max=13420, avg=8051.88, stdev=648.98 00:21:04.750 lat (usec): min=2668, max=13422, avg=8054.59, stdev=648.86 00:21:04.750 clat percentiles (usec): 00:21:04.750 | 1.00th=[ 6521], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7570], 00:21:04.750 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8225], 00:21:04.750 | 70.00th=[ 8356], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 8979], 00:21:04.750 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[11338], 99.95th=[12911], 00:21:04.750 | 99.99th=[13304] 00:21:04.750 bw ( KiB/s): min=33760, max=35400, per=99.89%, avg=34782.00, stdev=714.06, samples=4 00:21:04.750 iops : min= 8440, max= 8850, avg=8695.50, stdev=178.52, samples=4 00:21:04.750 write: IOPS=8698, BW=34.0MiB/s (35.6MB/s)(68.2MiB/2006msec); 0 zone resets 00:21:04.750 slat (usec): min=2, max=157, avg= 2.88, stdev= 1.67 00:21:04.750 clat (usec): min=1516, max=11232, avg=6602.76, stdev=535.37 00:21:04.750 lat (usec): min=1525, max=11234, avg=6605.64, stdev=535.30 00:21:04.750 clat percentiles (usec): 00:21:04.750 | 1.00th=[ 5473], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:21:04.750 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:21:04.750 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7439], 00:21:04.750 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[10421], 99.95th=[10945], 00:21:04.750 | 99.99th=[11207] 00:21:04.750 bw ( KiB/s): min=34552, max=35048, per=99.98%, avg=34790.00, stdev=229.68, samples=4 00:21:04.750 iops : min= 8638, max= 8762, avg=8697.50, stdev=57.42, samples=4 00:21:04.750 lat (msec) : 2=0.02%, 4=0.12%, 10=99.70%, 20=0.16% 00:21:04.750 cpu : usr=60.45%, sys=37.66%, ctx=88, majf=0, minf=40 00:21:04.750 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:04.750 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:04.750 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:04.750 issued rwts: total=17463,17450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:04.750 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:04.750 00:21:04.750 Run status group 0 (all jobs): 00:21:04.751 READ: bw=34.0MiB/s (35.7MB/s), 34.0MiB/s-34.0MiB/s (35.7MB/s-35.7MB/s), io=68.2MiB (71.5MB), run=2006-2006msec 00:21:04.751 WRITE: bw=34.0MiB/s (35.6MB/s), 34.0MiB/s-34.0MiB/s (35.6MB/s-35.6MB/s), io=68.2MiB (71.5MB), run=2006-2006msec 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:04.751 14:22:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:04.751 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:04.751 fio-3.35 00:21:04.751 Starting 1 thread 00:21:04.751 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.281 00:21:07.281 test: (groupid=0, jobs=1): err= 0: pid=976219: Thu Jul 25 14:22:36 2024 00:21:07.281 read: IOPS=8376, BW=131MiB/s (137MB/s)(263MiB/2008msec) 00:21:07.281 slat (nsec): min=2812, max=90460, avg=3535.54, stdev=1553.09 00:21:07.281 clat (usec): min=1573, max=19557, avg=8752.76, stdev=2037.50 00:21:07.281 lat (usec): min=1577, max=19561, avg=8756.30, stdev=2037.51 00:21:07.281 clat percentiles (usec): 00:21:07.281 | 1.00th=[ 4752], 5.00th=[ 5604], 10.00th=[ 6259], 20.00th=[ 7046], 00:21:07.281 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8717], 60.00th=[ 9110], 00:21:07.281 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[12387], 00:21:07.281 | 99.00th=[14091], 99.50th=[14746], 99.90th=[16188], 99.95th=[16450], 00:21:07.281 | 99.99th=[16581] 00:21:07.281 bw ( KiB/s): min=62976, max=75840, per=52.01%, avg=69712.00, 
stdev=7075.91, samples=4 00:21:07.281 iops : min= 3936, max= 4740, avg=4357.00, stdev=442.24, samples=4 00:21:07.281 write: IOPS=4922, BW=76.9MiB/s (80.7MB/s)(143MiB/1854msec); 0 zone resets 00:21:07.281 slat (usec): min=30, max=138, avg=33.02, stdev= 4.74 00:21:07.281 clat (usec): min=2845, max=21207, avg=11515.99, stdev=1906.79 00:21:07.281 lat (usec): min=2878, max=21243, avg=11549.02, stdev=1906.75 00:21:07.281 clat percentiles (usec): 00:21:07.281 | 1.00th=[ 7570], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9896], 00:21:07.281 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:21:07.281 | 70.00th=[12387], 80.00th=[13173], 90.00th=[14091], 95.00th=[14746], 00:21:07.281 | 99.00th=[15926], 99.50th=[16712], 99.90th=[20055], 99.95th=[20317], 00:21:07.281 | 99.99th=[21103] 00:21:07.281 bw ( KiB/s): min=65536, max=79072, per=92.06%, avg=72512.00, stdev=7388.84, samples=4 00:21:07.281 iops : min= 4096, max= 4942, avg=4532.00, stdev=461.80, samples=4 00:21:07.281 lat (msec) : 2=0.03%, 4=0.24%, 10=56.74%, 20=42.95%, 50=0.04% 00:21:07.281 cpu : usr=76.44%, sys=21.96%, ctx=37, majf=0, minf=69 00:21:07.281 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:07.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.281 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:07.281 issued rwts: total=16821,9127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.281 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:07.281 00:21:07.281 Run status group 0 (all jobs): 00:21:07.281 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2008-2008msec 00:21:07.281 WRITE: bw=76.9MiB/s (80.7MB/s), 76.9MiB/s-76.9MiB/s (80.7MB/s-80.7MB/s), io=143MiB (150MB), run=1854-1854msec 00:21:07.281 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.539 14:22:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.539 rmmod nvme_tcp 00:21:07.539 rmmod nvme_fabrics 00:21:07.539 rmmod nvme_keyring 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 975409 ']' 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@490 -- # killprocess 975409 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 975409 ']' 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 975409 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 975409 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 975409' 00:21:07.539 killing process with pid 975409 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 975409 00:21:07.539 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 975409 00:21:07.799 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:07.799 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:07.799 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:07.799 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.799 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.799 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.799 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.799 14:22:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.335 14:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:10.335 00:21:10.335 real 0m12.271s 00:21:10.335 user 0m35.363s 00:21:10.335 sys 0m4.277s 00:21:10.335 14:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:10.335 14:22:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.335 ************************************ 00:21:10.335 END TEST nvmf_fio_host 00:21:10.335 ************************************ 00:21:10.335 14:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:21:10.335 14:22:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:10.335 14:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:10.335 14:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:10.335 14:22:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.335 ************************************ 00:21:10.336 START TEST nvmf_failover 00:21:10.336 ************************************ 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:10.336 * Looking for 
test storage... 00:21:10.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:10.336 14:22:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.243 14:22:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:12.243 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:12.243 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:12.243 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:12.243 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:12.243 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:12.243 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:12.243 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:21:12.243 00:21:12.243 --- 10.0.0.2 ping statistics --- 00:21:12.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.244 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:12.244 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.244 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:21:12.244 00:21:12.244 --- 10.0.0.1 ping statistics --- 00:21:12.244 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.244 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=978415 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:12.244 14:22:41 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 978415 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 978415 ']' 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:12.244 14:22:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:12.244 [2024-07-25 14:22:41.731297] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:21:12.244 [2024-07-25 14:22:41.731384] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.244 EAL: No free 2048 kB hugepages reported on node 1 00:21:12.244 [2024-07-25 14:22:41.792983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:12.501 [2024-07-25 14:22:41.897853] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.501 [2024-07-25 14:22:41.897911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.501 [2024-07-25 14:22:41.897933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.501 [2024-07-25 14:22:41.897944] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.501 [2024-07-25 14:22:41.897968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
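The nvmftestinit trace above amounts to the following test-bed setup: the first ice port (cvl_0_0) is moved into a network namespace and becomes the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then started inside the namespace on cores 1-3 (-m 0xE). A condensed sketch of the traced commands, not the verbatim nvmf/common.sh:

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2                                                 # root ns -> target ns reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns reachability
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &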
00:21:12.501 [2024-07-25 14:22:41.898097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:12.501 [2024-07-25 14:22:41.898139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:12.501 [2024-07-25 14:22:41.898143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:12.501 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:12.501 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:12.501 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:12.501 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:12.501 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:12.501 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.501 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:12.758 [2024-07-25 14:22:42.259142] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.758 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:13.017 Malloc0 00:21:13.017 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:13.274 14:22:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:13.532 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:13.790 [2024-07-25 14:22:43.301728] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:13.790 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:14.046 [2024-07-25 14:22:43.594643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:14.046 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:14.303 [2024-07-25 14:22:43.867438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=978705 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 978705 /var/tmp/bdevperf.sock 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 978705 ']' 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:14.303 14:22:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:14.561 14:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:14.561 14:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:14.561 14:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:15.164 NVMe0n1 00:21:15.164 14:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:15.422 00:21:15.422 14:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=978847 00:21:15.422 14:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:15.422 14:22:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:16.355 14:22:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.613 [2024-07-25 14:22:46.180597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180700] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180712] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180746] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180819] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180831] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180869] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.180996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.181010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.181023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.181036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.181066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.613 [2024-07-25 14:22:46.181084] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.614 [2024-07-25 14:22:46.181096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.614 [2024-07-25 14:22:46.181109] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.614 [2024-07-25 14:22:46.181122] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.614 [2024-07-25 14:22:46.181135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955f40 is same with the state(5) to be set 00:21:16.614 14:22:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:19.892 14:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:20.150 00:21:20.150 14:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:20.409 [2024-07-25 14:22:49.866906] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.866975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.409 [2024-07-25 14:22:49.867129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 
14:22:49.867181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867218] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867320] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867372] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867385] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same 
with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867482] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867567] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867578] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867590] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867616] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867727] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867739] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867751] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867763] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867786] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867833] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867845] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 [2024-07-25 14:22:49.867868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1956d10 is same with the state(5) to be set 00:21:20.410 14:22:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:23.692 14:22:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.692 [2024-07-25 14:22:53.131843] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.692 14:22:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:24.624 14:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:24.883 [2024-07-25 14:22:54.386432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386485] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 
14:22:54.386510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386585] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386597] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386622] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386634] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.883 [2024-07-25 14:22:54.386671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386706] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same 
with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386815] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386899] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386964] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.386999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.387011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.387023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.387048] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.387071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 [2024-07-25 14:22:54.387085] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957ab0 is same with the state(5) to be set 00:21:24.884 14:22:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 978847 00:21:31.449 0 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 978705 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 978705 ']' 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 978705 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 978705 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 978705' 00:21:31.450 killing process with pid 978705 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 978705 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 978705 00:21:31.450 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:31.450 [2024-07-25 14:22:43.931142] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:21:31.450 [2024-07-25 14:22:43.931238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid978705 ] 00:21:31.450 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.450 [2024-07-25 14:22:43.990634] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.450 [2024-07-25 14:22:44.097888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.450 Running I/O for 15 seconds... 
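The bdevperf output that follows (the cat of try.txt) was produced by the failover exercise traced above. Condensed, the host/failover.sh body is roughly the sequence below; a sketch of the traced RPC calls, not the verbatim script:

  # target side: transport, backing bdev, subsystem with three listeners
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # initiator side: bdevperf with two paths to the same subsystem, 15 s of verify I/O
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 15 -f &
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $bdevperf_rpc_sock perform_tests &
  # force failovers while I/O runs: drop 4420, add a path on 4422, drop 4421, re-add 4420, drop 4422
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 3
  $rpc_py -s $bdevperf_rpc_sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421; sleep 3
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420; sleep 1
  $rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The repeated nvmf_tcp_qpair_set_recv_state messages above correspond to the target tearing down qpairs on the removed listeners, and the ABORTED - SQ DELETION completions in the bdevperf log below are the in-flight I/Os on those qpairs being completed with an abort status so the multipath layer can retry them on the surviving path.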
00:21:31.450 [2024-07-25 14:22:46.181956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-07-25 14:22:46.182014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-07-25 14:22:46.182047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-07-25 14:22:46.182085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.450 [2024-07-25 14:22:46.182113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd10f0 is same with the state(5) to be set 00:21:31.450 [2024-07-25 14:22:46.182208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.450 [2024-07-25 14:22:46.182229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.450 [2024-07-25 14:22:46.182273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.450 [2024-07-25 14:22:46.182305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.450 [2024-07-25 14:22:46.182337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.450 [2024-07-25 14:22:46.182382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.450 [2024-07-25 14:22:46.182398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.450 [2024-07-25 14:22:46.182412] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[condensed: 00:21:31.450-00:21:31.454, 2024-07-25 14:22:46.182427-14:22:46.186189: repeated nvme_qpair.c:243 nvme_io_qpair_print_command / nvme_qpair.c:474 spdk_nvme_print_completion *NOTICE* pairs covering the outstanding commands on sqid:1 (WRITE lba:79792-80224, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000; READ lba:79208-79728, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), every one completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
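For reference, the "(00/08)" printed with every one of those completions is the NVMe (status code type / status code) pair: SCT 0x0 is the generic command status set and SC 0x08 is "Command Aborted due to SQ Deletion", which is exactly what the ABORTED - SQ DELETION text spells out. A minimal decode of that pair from a completion line might look like the sketch below; the regex and the small status table are illustrative assumptions, not part of SPDK or of this test.

import re

# Generic command status (SCT 0x0) codes seen in this log (illustrative subset).
GENERIC_STATUS = {0x00: "SUCCESS", 0x08: "ABORTED - SQ DELETION"}

def decode_status(line):
    # Pull the "(SCT/SC)" hex pair out of an spdk_nvme_print_completion notice.
    m = re.search(r"\((\w{2})/(\w{2})\)", line)
    if not m:
        return None
    sct, sc = int(m.group(1), 16), int(m.group(2), 16)
    if sct == 0x0:
        return GENERIC_STATUS.get(sc, "generic status 0x%02x" % sc)
    return "sct 0x%x sc 0x%02x" % (sct, sc)

print(decode_status("... ABORTED - SQ DELETION (00/08) qid:1 cid:0 ..."))  # -> ABORTED - SQ DELETION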
00:21:31.454 [2024-07-25 14:22:46.186218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:31.454 [2024-07-25 14:22:46.186233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:31.454 [2024-07-25 14:22:46.186245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79736 len:8 PRP1 0x0 PRP2 0x0
00:21:31.454 [2024-07-25 14:22:46.186259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.454 [2024-07-25 14:22:46.186324] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1deec10 was disconnected and freed. reset controller.
00:21:31.454 [2024-07-25 14:22:46.186344] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:31.454 [2024-07-25 14:22:46.186360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:31.454 [2024-07-25 14:22:46.189671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:31.454 [2024-07-25 14:22:46.189711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd10f0 (9): Bad file descriptor
00:21:31.454 [2024-07-25 14:22:46.222918] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
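The lines above are the interesting part of this burst: the queued I/O is aborted and completed manually, qpair 0x1deec10 is disconnected and freed, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller nqn.2016-06.io.spdk:cnode1 is reset successfully. When a run emits thousands of the per-command abort notices seen here, a small throwaway parser can reduce them to per-opcode counts plus the failover/reset milestones. The sketch below is only an illustration built on the log format shown in this output; it is not part of SPDK or of the autotest scripts.

import re
import sys
from collections import Counter

# Patterns copied from the nvme_qpair.c / bdev_nvme.c notices visible in this log.
CMD_RE = re.compile(r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+)")
CPL_RE = re.compile(r"spdk_nvme_print_completion: \*NOTICE\*: (.+?) \(\w{2}/\w{2}\)")
EVENT_RE = re.compile(
    r"(aborting queued i/o|was disconnected and freed|Start failover from \S+ to \S+|"
    r"resetting controller|Resetting controller successful)"
)

def summarize(lines):
    commands = Counter()      # command notices, keyed by opcode and submission queue
    completions = Counter()   # completion status strings
    events = []               # abort / disconnect / failover / reset milestones, in order
    for line in lines:
        if (m := CMD_RE.search(line)):
            commands["%s sqid:%s" % (m.group(1), m.group(2))] += 1
        if (m := CPL_RE.search(line)):
            completions[m.group(1)] += 1
        if (m := EVENT_RE.search(line)):
            events.append(m.group(1))
    return commands, completions, events

if __name__ == "__main__":
    cmds, cpls, evts = summarize(sys.stdin)
    print("commands:", dict(cmds))
    print("completions:", dict(cpls))
    print("events:", evts)

Piping this part of the console output through it would report the READ/WRITE abort counts on sqid:1 and the ordered abort, disconnect, failover and reset events listed above.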
[condensed: 00:21:31.454-00:21:31.457, 2024-07-25 14:22:49.868535-14:22:49.871274: roughly three seconds later the same pattern repeats on sqid:1: nvme_qpair.c:243 nvme_io_qpair_print_command / nvme_qpair.c:474 spdk_nvme_print_completion *NOTICE* pairs for READ lba:82384-82600 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE lba:82616-83080 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:31.457 [2024-07-25 14:22:49.871290]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.457 [2024-07-25 14:22:49.871304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.457 [2024-07-25 14:22:49.871319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.457 [2024-07-25 14:22:49.871334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.457 [2024-07-25 14:22:49.871350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.457 [2024-07-25 14:22:49.871365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.457 [2024-07-25 14:22:49.871399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.457 [2024-07-25 14:22:49.871415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.457 [2024-07-25 14:22:49.871430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.457 [2024-07-25 14:22:49.871444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.457 [2024-07-25 14:22:49.871459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.457 [2024-07-25 14:22:49.871474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:107 nsid:1 lba:83168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.458 [2024-07-25 14:22:49.871688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83240 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.458 [2024-07-25 14:22:49.871964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.871996] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.458 [2024-07-25 14:22:49.872013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83256 len:8 PRP1 0x0 PRP2 0x0 00:21:31.458 [2024-07-25 14:22:49.872026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.458 [2024-07-25 14:22:49.872136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.458 [2024-07-25 14:22:49.872165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.458 [2024-07-25 14:22:49.872200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.458 [2024-07-25 14:22:49.872227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd10f0 is same with the state(5) to be set 00:21:31.458 [2024-07-25 14:22:49.872494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.458 [2024-07-25 14:22:49.872518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.458 [2024-07-25 14:22:49.872531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83264 len:8 PRP1 0x0 PRP2 0x0 00:21:31.458 [2024-07-25 14:22:49.872544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.458 [2024-07-25 14:22:49.872573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.458 [2024-07-25 14:22:49.872584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83272 len:8 
PRP1 0x0 PRP2 0x0 00:21:31.458 [2024-07-25 14:22:49.872597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.458 [2024-07-25 14:22:49.872622] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.458 [2024-07-25 14:22:49.872633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83280 len:8 PRP1 0x0 PRP2 0x0 00:21:31.458 [2024-07-25 14:22:49.872645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872658] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.458 [2024-07-25 14:22:49.872669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.458 [2024-07-25 14:22:49.872680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83288 len:8 PRP1 0x0 PRP2 0x0 00:21:31.458 [2024-07-25 14:22:49.872693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.458 [2024-07-25 14:22:49.872716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.458 [2024-07-25 14:22:49.872727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83296 len:8 PRP1 0x0 PRP2 0x0 00:21:31.458 [2024-07-25 14:22:49.872746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.458 [2024-07-25 14:22:49.872759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.458 [2024-07-25 14:22:49.872770] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.458 [2024-07-25 14:22:49.872781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83304 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.872793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.872807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.872818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.872828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83312 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.872846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.872859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.872870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.872881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83320 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 
14:22:49.872894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.872910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.872921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.872932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83328 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.872945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.872958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.872969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.872980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83336 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.872992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83344 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83352 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83360 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83368 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83376 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83384 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83392 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873422] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83400 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82384 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873517] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82392 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82400 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.873591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.873605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.873617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.873627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82408 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.889002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.889034] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.889070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.889084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82416 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.889099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.889133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.889145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.889163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82424 len:8 PRP1 0x0 PRP2 0x0 00:21:31.459 [2024-07-25 14:22:49.889177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.459 [2024-07-25 14:22:49.889191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.459 [2024-07-25 14:22:49.889203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.459 [2024-07-25 14:22:49.889214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82432 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889242] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889254] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82440 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:31.460 [2024-07-25 14:22:49.889294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82448 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889374] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82456 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82464 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82472 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82480 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889574] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889586] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82488 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889622] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82496 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82504 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82512 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82520 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82528 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82536 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:21:31.460 [2024-07-25 14:22:49.889919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82544 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.889959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.889971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.889982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82552 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.889994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.890007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.890017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.890029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82560 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.890056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.890079] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.890090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.890116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82568 len:8 PRP1 0x0 PRP2 0x0 00:21:31.460 [2024-07-25 14:22:49.890130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.460 [2024-07-25 14:22:49.890144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.460 [2024-07-25 14:22:49.890156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.460 [2024-07-25 14:22:49.890168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82576 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890206] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82584 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890256] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82592 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82600 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82616 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82624 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82632 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82640 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:21:31.461 [2024-07-25 14:22:49.890604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82648 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890640] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82656 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890687] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82664 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82672 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82680 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82688 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 
14:22:49.890889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82696 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890915] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82704 len:8 PRP1 0x0 PRP2 0x0 00:21:31.461 [2024-07-25 14:22:49.890948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.461 [2024-07-25 14:22:49.890961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.461 [2024-07-25 14:22:49.890972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.461 [2024-07-25 14:22:49.890983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82712 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.890995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891008] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82720 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82728 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82736 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891213] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82744 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82752 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82760 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82776 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82784 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:82792 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82800 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82808 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82816 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891741] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891776] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82832 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891834] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82840 len:8 PRP1 0x0 PRP2 0x0 
00:21:31.462 [2024-07-25 14:22:49.891857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82848 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.462 [2024-07-25 14:22:49.891916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.462 [2024-07-25 14:22:49.891927] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.462 [2024-07-25 14:22:49.891938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82856 len:8 PRP1 0x0 PRP2 0x0 00:21:31.462 [2024-07-25 14:22:49.891950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.891962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.891973] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.891986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82864 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.891999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82872 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82880 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82888 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82896 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82904 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82912 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82920 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892506] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892569] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.892660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.892671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82960 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.892683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.892696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.905142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.905172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82968 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.905187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.905204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.905218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.905231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82976 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.905244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.905258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.905269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.905281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82984 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.905299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:31.463 [2024-07-25 14:22:49.905318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.905330] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.905341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82992 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.905354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.463 [2024-07-25 14:22:49.905368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.463 [2024-07-25 14:22:49.905379] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.463 [2024-07-25 14:22:49.905390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83000 len:8 PRP1 0x0 PRP2 0x0 00:21:31.463 [2024-07-25 14:22:49.905423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905449] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83008 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905502] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83016 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83024 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83032 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905642] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83040 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905698] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83048 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83056 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83064 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905844] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83072 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905880] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83080 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905937] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83088 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.905960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.905973] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.905984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.905995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83096 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.906007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.906020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.906031] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.906041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.906054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.906105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.906121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.906133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83112 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.906145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.906159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.906170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.906181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83120 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.906194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.906207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.906218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.906229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83128 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.906242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.906255] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 
14:22:49.906267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.906278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83136 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.906290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.906303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.906314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.464 [2024-07-25 14:22:49.906326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83144 len:8 PRP1 0x0 PRP2 0x0 00:21:31.464 [2024-07-25 14:22:49.906348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.464 [2024-07-25 14:22:49.906362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.464 [2024-07-25 14:22:49.906387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83152 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83160 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906473] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906484] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83168 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906534] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83176 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906582] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82608 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906618] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83184 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83192 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906724] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83200 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83208 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906819] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83216 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:21:31.465 [2024-07-25 14:22:49.906877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83224 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83232 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906952] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.906963] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.906974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83240 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.906986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.906999] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.907010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.907020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83248 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.907033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.907046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.465 [2024-07-25 14:22:49.907057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.465 [2024-07-25 14:22:49.907092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83256 len:8 PRP1 0x0 PRP2 0x0 00:21:31.465 [2024-07-25 14:22:49.907105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:49.907171] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1dffd40 was disconnected and freed. reset controller. 00:21:31.465 [2024-07-25 14:22:49.907191] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:31.465 [2024-07-25 14:22:49.907207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:31.465 [2024-07-25 14:22:49.907265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd10f0 (9): Bad file descriptor 00:21:31.465 [2024-07-25 14:22:49.910469] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:31.465 [2024-07-25 14:22:49.938730] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:31.465 [2024-07-25 14:22:54.386237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.465 [2024-07-25 14:22:54.386309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:54.386328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.465 [2024-07-25 14:22:54.386343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:54.386357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.465 [2024-07-25 14:22:54.386370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:54.386405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.465 [2024-07-25 14:22:54.386419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:54.386432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd10f0 is same with the state(5) to be set 00:21:31.465 [2024-07-25 14:22:54.388220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.465 [2024-07-25 14:22:54.388245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.465 [2024-07-25 14:22:54.388272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 
14:22:54.388678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.388979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.388993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.389008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.389022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.389036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:4096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.389073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.389090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.389119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.389136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.466 [2024-07-25 14:22:54.389151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.466 [2024-07-25 14:22:54.389167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:4136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:4144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389943] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.389971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.389984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.467 [2024-07-25 14:22:54.390000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.467 [2024-07-25 14:22:54.390013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390268] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:4400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:4456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4496 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.468 [2024-07-25 14:22:54.390679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:4536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 
14:22:54.390878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.468 [2024-07-25 14:22:54.390939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.468 [2024-07-25 14:22:54.390954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.390968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.390983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.390997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:4600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391198] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:4648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:4664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:4696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:31.469 [2024-07-25 14:22:54.391652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.469 [2024-07-25 14:22:54.391695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:8 PRP1 0x0 PRP2 0x0 00:21:31.469 [2024-07-25 14:22:54.391708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.469 [2024-07-25 14:22:54.391742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.469 [2024-07-25 14:22:54.391754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4776 len:8 PRP1 0x0 PRP2 0x0 00:21:31.469 [2024-07-25 14:22:54.391767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.469 [2024-07-25 14:22:54.391793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.469 [2024-07-25 14:22:54.391805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4784 len:8 PRP1 0x0 PRP2 0x0 00:21:31.469 [2024-07-25 14:22:54.391818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.469 [2024-07-25 
14:22:54.391843] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.469 [2024-07-25 14:22:54.391854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4792 len:8 PRP1 0x0 PRP2 0x0 00:21:31.469 [2024-07-25 14:22:54.391867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.469 [2024-07-25 14:22:54.391892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.469 [2024-07-25 14:22:54.391904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:8 PRP1 0x0 PRP2 0x0 00:21:31.469 [2024-07-25 14:22:54.391917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.469 [2024-07-25 14:22:54.391942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.469 [2024-07-25 14:22:54.391953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4808 len:8 PRP1 0x0 PRP2 0x0 00:21:31.469 [2024-07-25 14:22:54.391966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.469 [2024-07-25 14:22:54.391980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.469 [2024-07-25 14:22:54.391991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.469 [2024-07-25 14:22:54.392002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4816 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392029] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4824 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392103] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392159] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392171] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4840 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392210] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392222] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4848 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392261] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4856 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392323] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4872 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4880 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:21:31.470 [2024-07-25 14:22:54.392498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4888 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392525] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:31.470 [2024-07-25 14:22:54.392536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:31.470 [2024-07-25 14:22:54.392558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:8 PRP1 0x0 PRP2 0x0 00:21:31.470 [2024-07-25 14:22:54.392572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.470 [2024-07-25 14:22:54.392629] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e01b40 was disconnected and freed. reset controller. 00:21:31.470 [2024-07-25 14:22:54.392648] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:31.470 [2024-07-25 14:22:54.392664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:31.470 [2024-07-25 14:22:54.395935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:31.470 [2024-07-25 14:22:54.395977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd10f0 (9): Bad file descriptor 00:21:31.470 [2024-07-25 14:22:54.433409] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:31.470 00:21:31.470 Latency(us) 00:21:31.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.470 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:31.470 Verification LBA range: start 0x0 length 0x4000 00:21:31.470 NVMe0n1 : 15.01 8648.53 33.78 244.39 0.00 14365.69 801.00 46215.02 00:21:31.470 =================================================================================================================== 00:21:31.470 Total : 8648.53 33.78 244.39 0.00 14365.69 801.00 46215.02 00:21:31.470 Received shutdown signal, test time was about 15.000000 seconds 00:21:31.470 00:21:31.470 Latency(us) 00:21:31.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.470 =================================================================================================================== 00:21:31.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=980572 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 980572 /var/tmp/bdevperf.sock 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 980572 ']' 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.470 [2024-07-25 14:23:00.935329] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.470 14:23:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:31.727 [2024-07-25 14:23:01.179993] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:31.727 14:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:31.983 NVMe0n1 00:21:31.983 14:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.239 00:21:32.239 14:23:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.803 00:21:32.803 14:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.803 14:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:33.060 14:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:33.318 14:23:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:36.647 14:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:36.647 14:23:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:36.647 14:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=981235 00:21:36.647 14:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:36.647 14:23:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 981235 00:21:37.580 0 00:21:37.580 14:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:37.580 [2024-07-25 14:23:00.438560] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:21:37.580 [2024-07-25 14:23:00.438660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid980572 ] 00:21:37.580 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.580 [2024-07-25 14:23:00.498464] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.580 [2024-07-25 14:23:00.604921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.580 [2024-07-25 14:23:02.789021] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:37.580 [2024-07-25 14:23:02.789152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.580 [2024-07-25 14:23:02.789176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.580 [2024-07-25 14:23:02.789196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.580 [2024-07-25 14:23:02.789211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.580 [2024-07-25 14:23:02.789227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.580 [2024-07-25 14:23:02.789242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.580 [2024-07-25 14:23:02.789257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:37.580 [2024-07-25 14:23:02.789271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:37.580 [2024-07-25 14:23:02.789286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:37.580 [2024-07-25 14:23:02.789343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:37.580 [2024-07-25 14:23:02.789390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x93d0f0 (9): Bad file descriptor 00:21:37.580 [2024-07-25 14:23:02.843439] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:37.580 Running I/O for 1 seconds... 
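For the short verification pass whose summary follows, bdevperf is started in RPC-driven mode (-z) and the actual I/O is kicked off from the harness through the perform_tests helper; a condensed sketch, assuming the same socket, subsystem and attached paths as in the sketch above, with the absolute /var/jenkins/... paths shortened to the SPDK repo root:

    # sketch; the harness uses the full workspace paths traced above
    # start bdevperf waiting for RPCs: queue depth 128, 4096-byte I/O, verify workload, 1 second
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    # ...attach the NVMe0 paths over the same socket as shown earlier...
    # then trigger the run and wait for its result
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The one-second run rides out the failover from 10.0.0.2:4420 to 10.0.0.2:4421 logged above (the controller reset completes), and its latency summary follows.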
00:21:37.580 00:21:37.580 Latency(us) 00:21:37.580 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.580 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:37.580 Verification LBA range: start 0x0 length 0x4000 00:21:37.580 NVMe0n1 : 1.05 8491.47 33.17 0.00 0.00 14454.89 3203.98 44079.03 00:21:37.580 =================================================================================================================== 00:21:37.580 Total : 8491.47 33.17 0.00 0.00 14454.89 3203.98 44079.03 00:21:37.580 14:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.580 14:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:38.145 14:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.145 14:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:38.145 14:23:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:38.447 14:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:38.705 14:23:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 980572 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 980572 ']' 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 980572 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 980572 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 980572' 00:21:41.982 killing process with pid 980572 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 980572 00:21:41.982 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 980572 00:21:42.240 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:42.240 14:23:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.497 rmmod nvme_tcp 00:21:42.497 rmmod nvme_fabrics 00:21:42.497 rmmod nvme_keyring 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 978415 ']' 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 978415 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 978415 ']' 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 978415 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 978415 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 978415' 00:21:42.497 killing process with pid 978415 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 978415 00:21:42.497 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 978415 00:21:43.063 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.063 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.063 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.063 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.063 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.063 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.063 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.063 14:23:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.966 00:21:44.966 real 0m35.036s 00:21:44.966 user 2m3.508s 00:21:44.966 sys 0m5.809s 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 ************************************ 00:21:44.966 END TEST nvmf_failover 00:21:44.966 ************************************ 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:44.966 ************************************ 00:21:44.966 START TEST nvmf_host_discovery 00:21:44.966 ************************************ 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:44.966 * Looking for test storage... 00:21:44.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:44.966 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:44.967 14:23:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:44.967 14:23:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:46.872 14:23:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:46.872 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:46.872 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:46.872 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:46.872 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.872 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.131 14:23:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:21:47.131 00:21:47.131 --- 10.0.0.2 ping statistics --- 00:21:47.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.131 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:21:47.131 00:21:47.131 --- 10.0.0.1 ping statistics --- 00:21:47.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.131 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=983955 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 983955 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 983955 ']' 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
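At this point nvmf_tcp_init has finished building the point-to-point test topology: the target-side port cvl_0_0 is moved into a dedicated network namespace, 10.0.0.1/24 is assigned to the initiator-side interface cvl_0_1 in the root namespace, 10.0.0.2/24 to cvl_0_0 inside the namespace, the NVMe/TCP I/O port is opened in iptables, and the two pings above confirm reachability in both directions. A condensed sketch of the same steps, assuming root privileges and the cvl_0_0/cvl_0_1 interface names seen in this run:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # allow the NVMe/TCP I/O port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

With the namespace in place, nvmfappstart launches the target inside it ('ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2'), and waitforlisten below polls until the target answers on its default RPC socket /var/tmp/spdk.sock.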
00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.131 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.131 [2024-07-25 14:23:16.700477] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:21:47.131 [2024-07-25 14:23:16.700564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.131 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.131 [2024-07-25 14:23:16.762223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.390 [2024-07-25 14:23:16.867515] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.390 [2024-07-25 14:23:16.867570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.390 [2024-07-25 14:23:16.867593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.390 [2024-07-25 14:23:16.867604] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.390 [2024-07-25 14:23:16.867614] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.390 [2024-07-25 14:23:16.867640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.390 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.390 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:47.390 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:47.390 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:47.390 14:23:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.390 [2024-07-25 14:23:17.018154] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:21:47.390 [2024-07-25 14:23:17.026382] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.390 null0 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.390 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.648 null1 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=983979 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 983979 /tmp/host.sock 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 983979 ']' 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:47.648 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.648 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.648 [2024-07-25 14:23:17.097892] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
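Two SPDK applications now run side by side: the nvmf target inside the namespace, controlled over /var/tmp/spdk.sock, and, starting here, a second nvmf_tgt instance acting as the host/initiator, pinned to core 0 and controlled over /tmp/host.sock. rpc_cmd in these traces is the autotest wrapper around SPDK's JSON-RPC client; issued directly, the target setup performed so far and the discovery start that follows would look roughly like the sketch below (using scripts/rpc.py as the client is an assumption of the sketch; the flags are copied from the trace):

  # target side -- default RPC socket /var/tmp/spdk.sock
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512      # backing bdevs for the namespaces added later
  scripts/rpc.py bdev_null_create null1 1000 512
  scripts/rpc.py bdev_wait_for_examine

  # host side -- the second app, reached via -s /tmp/host.sock
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test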
00:21:47.648 [2024-07-25 14:23:17.097961] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid983979 ] 00:21:47.648 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.648 [2024-07-25 14:23:17.154324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.648 [2024-07-25 14:23:17.258294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.909 14:23:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.909 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:47.910 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:47.910 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.910 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.910 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.910 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.910 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.910 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.910 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:48.168 14:23:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 [2024-07-25 14:23:17.668003] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:48.168 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.426 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:48.426 14:23:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:48.993 [2024-07-25 14:23:18.437808] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:48.993 [2024-07-25 14:23:18.437845] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:48.993 [2024-07-25 14:23:18.437868] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.993 
[2024-07-25 14:23:18.566326] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:49.251 [2024-07-25 14:23:18.667970] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:49.251 [2024-07-25 14:23:18.667997] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.251 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
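The strings compared in the @105/@106 checks above come from small helpers whose pipelines are spelled out in the @59/@55/@63 traces; consolidated, they read roughly as follows (function names and the /tmp/host.sock socket as in discovery.sh):

  get_subsystem_names() {    # controllers the host-side bdev_nvme currently has attached
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {          # bdevs created from discovered namespaces (nvme0n1, nvme0n2, ...)
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {    # listener ports (trsvcid) the given controller has paths to
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }

The trailing xargs flattens the sorted names into one space-separated line, which is what makes comparisons such as [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]] work.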
00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.510 14:23:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.510 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:49.768 14:23:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.768 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.769 [2024-07-25 14:23:19.296633] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.769 [2024-07-25 14:23:19.297652] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:49.769 [2024-07-25 14:23:19.297689] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:49.769 14:23:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:49.769 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.027 [2024-07-25 14:23:19.424221] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:50.027 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:50.027 14:23:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:50.285 [2024-07-25 14:23:19.684326] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:50.285 [2024-07-25 14:23:19.684348] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:50.285 [2024-07-25 14:23:19.684371] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:50.852 14:23:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.852 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.111 [2024-07-25 14:23:20.513151] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:51.111 [2024-07-25 14:23:20.513194] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:51.111 [2024-07-25 14:23:20.519417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.111 [2024-07-25 14:23:20.519453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.111 [2024-07-25 14:23:20.519471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:51.111 [2024-07-25 14:23:20.519486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.111 [2024-07-25 14:23:20.519501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.111 [2024-07-25 14:23:20.519515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.111 [2024-07-25 14:23:20.519539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:51.111 [2024-07-25 14:23:20.519553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:51.111 [2024-07-25 14:23:20.519575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26c20 is same with the state(5) to be set 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:51.111 [2024-07-25 14:23:20.529420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26c20 (9): Bad file descriptor 00:21:51.111 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.111 [2024-07-25 14:23:20.539454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.111 [2024-07-25 14:23:20.539669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.111 [2024-07-25 14:23:20.539712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26c20 with addr=10.0.0.2, port=4420 00:21:51.111 [2024-07-25 14:23:20.539730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26c20 is same with the state(5) to be set 00:21:51.111 [2024-07-25 14:23:20.539755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26c20 (9): Bad file descriptor 00:21:51.111 [2024-07-25 14:23:20.539790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.111 [2024-07-25 14:23:20.539808] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.112 [2024-07-25 14:23:20.539826] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.112 [2024-07-25 14:23:20.539847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.112 [2024-07-25 14:23:20.549546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.112 [2024-07-25 14:23:20.549763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.112 [2024-07-25 14:23:20.549790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26c20 with addr=10.0.0.2, port=4420 00:21:51.112 [2024-07-25 14:23:20.549806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26c20 is same with the state(5) to be set 00:21:51.112 [2024-07-25 14:23:20.549828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26c20 (9): Bad file descriptor 00:21:51.112 [2024-07-25 14:23:20.549848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.112 [2024-07-25 14:23:20.549863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.112 [2024-07-25 14:23:20.549876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.112 [2024-07-25 14:23:20.549894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:51.112 [2024-07-25 14:23:20.559616] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.112 [2024-07-25 14:23:20.559810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.112 [2024-07-25 14:23:20.559840] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26c20 with addr=10.0.0.2, port=4420 00:21:51.112 [2024-07-25 14:23:20.559857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26c20 is same with the state(5) to be set 00:21:51.112 [2024-07-25 14:23:20.559880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26c20 (9): Bad file descriptor 00:21:51.112 [2024-07-25 14:23:20.560733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.112 [2024-07-25 14:23:20.560754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.112 [2024-07-25 14:23:20.560777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.112 [2024-07-25 14:23:20.560806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.112 [2024-07-25 14:23:20.569690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.112 [2024-07-25 14:23:20.569878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.112 [2024-07-25 14:23:20.569906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26c20 with addr=10.0.0.2, port=4420 00:21:51.112 [2024-07-25 14:23:20.569923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26c20 is same with the state(5) to be set 00:21:51.112 [2024-07-25 14:23:20.569946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26c20 (9): Bad file descriptor 00:21:51.112 [2024-07-25 14:23:20.569978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.112 [2024-07-25 14:23:20.569996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.112 [2024-07-25 14:23:20.570010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.112 [2024-07-25 14:23:20.570030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.112 [2024-07-25 14:23:20.579758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.112 [2024-07-25 14:23:20.579962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.112 [2024-07-25 14:23:20.579990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26c20 with addr=10.0.0.2, port=4420 00:21:51.112 [2024-07-25 14:23:20.580005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26c20 is same with the state(5) to be set 00:21:51.112 [2024-07-25 14:23:20.580028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26c20 (9): Bad file descriptor 00:21:51.112 [2024-07-25 14:23:20.580096] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.112 [2024-07-25 14:23:20.580116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.112 [2024-07-25 14:23:20.580130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.112 [2024-07-25 14:23:20.580156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.112 [2024-07-25 14:23:20.589825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.112 [2024-07-25 14:23:20.589997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.112 [2024-07-25 14:23:20.590023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26c20 with addr=10.0.0.2, port=4420 00:21:51.112 [2024-07-25 14:23:20.590053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26c20 is same with the state(5) to be set 00:21:51.112 [2024-07-25 14:23:20.590086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26c20 (9): Bad file descriptor 00:21:51.112 [2024-07-25 14:23:20.590107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.112 [2024-07-25 14:23:20.590122] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.112 [2024-07-25 14:23:20.590135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.112 [2024-07-25 14:23:20.590169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:51.112 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:51.113 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:51.113 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:51.113 [2024-07-25 14:23:20.599890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:51.113 [2024-07-25 14:23:20.600085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.113 [2024-07-25 14:23:20.600123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb26c20 with addr=10.0.0.2, port=4420 00:21:51.113 [2024-07-25 14:23:20.600140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb26c20 is same with the state(5) to be set 00:21:51.113 [2024-07-25 14:23:20.600163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb26c20 (9): Bad file descriptor 00:21:51.113 [2024-07-25 14:23:20.600378] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:51.113 [2024-07-25 14:23:20.600436] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:51.113 [2024-07-25 14:23:20.600489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:51.113 [2024-07-25 14:23:20.600509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:51.113 [2024-07-25 14:23:20.600522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:51.113 [2024-07-25 14:23:20.600543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:51.113 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.113 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:51.113 14:23:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.048 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.307 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.308 14:23:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.679 [2024-07-25 14:23:22.897717] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:53.679 [2024-07-25 14:23:22.897739] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:53.679 [2024-07-25 14:23:22.897759] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:53.680 [2024-07-25 14:23:23.025193] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:53.680 [2024-07-25 14:23:23.092188] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:53.680 [2024-07-25 14:23:23.092221] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.680 request: 00:21:53.680 { 00:21:53.680 "name": "nvme", 00:21:53.680 "trtype": "tcp", 00:21:53.680 "traddr": "10.0.0.2", 00:21:53.680 "adrfam": "ipv4", 00:21:53.680 "trsvcid": "8009", 00:21:53.680 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:53.680 "wait_for_attach": true, 00:21:53.680 "method": "bdev_nvme_start_discovery", 00:21:53.680 "req_id": 1 00:21:53.680 } 00:21:53.680 Got JSON-RPC error response 00:21:53.680 response: 00:21:53.680 { 00:21:53.680 "code": -17, 00:21:53.680 "message": "File exists" 00:21:53.680 } 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local 
es=0 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.680 request: 00:21:53.680 { 00:21:53.680 "name": "nvme_second", 00:21:53.680 "trtype": "tcp", 00:21:53.680 "traddr": "10.0.0.2", 00:21:53.680 "adrfam": "ipv4", 00:21:53.680 "trsvcid": "8009", 00:21:53.680 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:53.680 "wait_for_attach": true, 00:21:53.680 "method": "bdev_nvme_start_discovery", 00:21:53.680 "req_id": 1 00:21:53.680 } 00:21:53.680 Got JSON-RPC error response 00:21:53.680 response: 00:21:53.680 { 00:21:53.680 "code": -17, 00:21:53.680 "message": "File exists" 00:21:53.680 } 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.680 14:23:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.680 14:23:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.049 [2024-07-25 14:23:24.303721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.049 [2024-07-25 14:23:24.303788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb58da0 with addr=10.0.0.2, port=8010 00:21:55.049 [2024-07-25 14:23:24.303823] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:55.049 [2024-07-25 14:23:24.303839] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:55.049 [2024-07-25 14:23:24.303851] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:55.983 [2024-07-25 14:23:25.306003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:55.983 [2024-07-25 14:23:25.306037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb58da0 with addr=10.0.0.2, port=8010 00:21:55.983 [2024-07-25 14:23:25.306086] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:55.983 [2024-07-25 14:23:25.306100] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:55.983 [2024-07-25 14:23:25.306112] bdev_nvme.c:7073:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:21:56.915 [2024-07-25 14:23:26.308281] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:56.915 request: 00:21:56.915 { 00:21:56.915 "name": "nvme_second", 00:21:56.915 "trtype": "tcp", 00:21:56.915 "traddr": "10.0.0.2", 00:21:56.915 "adrfam": "ipv4", 00:21:56.915 "trsvcid": "8010", 00:21:56.915 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:56.915 "wait_for_attach": false, 00:21:56.915 "attach_timeout_ms": 3000, 00:21:56.915 "method": "bdev_nvme_start_discovery", 00:21:56.915 "req_id": 1 00:21:56.915 } 00:21:56.915 Got JSON-RPC error response 00:21:56.915 response: 00:21:56.915 { 00:21:56.915 "code": -110, 00:21:56.915 "message": "Connection timed out" 00:21:56.915 } 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.915 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 983979 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:56.916 rmmod nvme_tcp 00:21:56.916 rmmod nvme_fabrics 00:21:56.916 rmmod nvme_keyring 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:56.916 14:23:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 983955 ']' 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 983955 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 983955 ']' 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 983955 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 983955 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 983955' 00:21:56.916 killing process with pid 983955 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 983955 00:21:56.916 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 983955 00:21:57.175 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:57.175 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:57.175 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:57.175 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:57.175 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:57.175 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.175 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.175 14:23:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:59.737 00:21:59.737 real 0m14.238s 00:21:59.737 user 0m21.267s 00:21:59.737 sys 0m2.838s 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:59.737 ************************************ 00:21:59.737 END TEST nvmf_host_discovery 00:21:59.737 ************************************ 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:59.737 14:23:28 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.737 ************************************ 00:21:59.737 START TEST nvmf_host_multipath_status 00:21:59.737 ************************************ 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:59.737 * Looking for test storage... 00:21:59.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.737 14:23:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.737 14:23:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:59.737 14:23:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A 
pci_drivers 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.633 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:01.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.634 14:23:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:01.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:01.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:01.634 14:23:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:01.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:01.634 14:23:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:01.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:01.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:22:01.634 00:22:01.634 --- 10.0.0.2 ping statistics --- 00:22:01.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.634 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:01.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:01.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:22:01.634 00:22:01.634 --- 10.0.0.1 ping statistics --- 00:22:01.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:01.634 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=987152 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 987152 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 987152 ']' 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:01.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.634 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.634 [2024-07-25 14:23:31.140071] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:22:01.634 [2024-07-25 14:23:31.140154] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.634 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.634 [2024-07-25 14:23:31.202639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:01.892 [2024-07-25 14:23:31.312549] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.892 [2024-07-25 14:23:31.312602] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.892 [2024-07-25 14:23:31.312616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.892 [2024-07-25 14:23:31.312627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.892 [2024-07-25 14:23:31.312636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:01.892 [2024-07-25 14:23:31.312723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.892 [2024-07-25 14:23:31.312729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.892 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.892 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:01.892 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.892 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.892 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.892 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.892 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=987152 00:22:01.892 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:02.150 [2024-07-25 14:23:31.720025] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.150 14:23:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:02.407 Malloc0 00:22:02.407 14:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:02.664 14:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:02.922 14:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.179 [2024-07-25 14:23:32.755484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.179 14:23:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:03.437 [2024-07-25 14:23:32.992071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=987435 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 987435 /var/tmp/bdevperf.sock 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 987435 ']' 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
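For readability, the target-side bring-up traced above can be condensed into the sketch below. It only restates the RPC calls already visible in the trace (TCP transport, Malloc0 bdev, subsystem cnode1, its namespace, and the two TCP listeners on ports 4420 and 4421); the $rpc_py shorthand and the note that nvmf_tgt is already running inside the cvl_0_0_ns_spdk namespace are my additions, not part of the test script itself.

# Sketch of the target-side setup shown in the trace above (paths shortened).
# Assumes nvmf_tgt was started with "-i 0 -e 0xFFFF -m 0x3" inside the
# cvl_0_0_ns_spdk namespace, exactly as logged above.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc_py nvmf_create_transport -t tcp -o -u 8192        # TCP transport with the options used by the test
$rpc_py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The -r flag on nvmf_create_subsystem enables ANA reporting, which is what the rest of the test exercises by flipping the ANA state of each listener.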
00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.437 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:03.695 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.695 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:03.695 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:03.952 14:23:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:04.518 Nvme0n1 00:22:04.518 14:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:05.083 Nvme0n1 00:22:05.083 14:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:05.083 14:23:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:06.990 14:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:06.990 14:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:07.248 14:23:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:07.512 14:23:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:08.445 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:08.445 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:08.445 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.445 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:08.702 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.702 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:08.703 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.703 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:08.960 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.960 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:08.960 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.960 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:09.218 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.218 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:09.218 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.218 14:23:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:09.476 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.476 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:09.476 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.476 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:09.734 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.734 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:09.734 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:09.734 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:09.992 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.992 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:09.992 14:23:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:10.250 14:23:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:10.507 14:23:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:11.877 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:11.877 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:11.877 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.877 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:11.877 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.877 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:11.877 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.877 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:12.155 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.155 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:12.155 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.155 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:12.412 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.412 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:12.412 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.412 14:23:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:12.695 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.695 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:12.695 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.695 14:23:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:12.953 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.953 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:12.953 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.953 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:13.211 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.211 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:13.211 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:13.469 14:23:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:13.726 14:23:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:14.657 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:14.657 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.657 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.657 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.915 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.915 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:14.915 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.915 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:15.172 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.172 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:15.172 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.172 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.429 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.429 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.429 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.429 14:23:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.686 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.686 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.686 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.686 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.943 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.943 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:15.943 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.943 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:16.201 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.201 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:16.201 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:16.459 14:23:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:16.718 14:23:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:17.652 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:17.652 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:17.652 14:23:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.652 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.910 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.910 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:17.910 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.910 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:18.167 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.167 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:18.167 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.168 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.425 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.425 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.425 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.425 14:23:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.682 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.682 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:18.682 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.682 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.940 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.940 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:18.940 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.940 14:23:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:19.197 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.197 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:19.197 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:19.454 14:23:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:19.712 14:23:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:20.644 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:20.644 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.644 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.644 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.901 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.901 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:20.902 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.902 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.159 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.159 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.159 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.159 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.417 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.417 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.417 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.417 14:23:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.674 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.674 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:21.674 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.674 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.931 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.931 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:21.931 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.931 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.189 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.189 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:22.189 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:22.447 14:23:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:22.704 14:23:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:23.637 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:23.637 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:23.637 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.637 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.894 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:23.894 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:23.894 14:23:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.894 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:24.151 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.151 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:24.151 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.151 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.408 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.408 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.408 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.408 14:23:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.666 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.666 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:24.666 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.666 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.924 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:24.924 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:24.924 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.924 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:25.182 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.182 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:25.440 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:22:25.440 14:23:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:25.698 14:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:25.956 14:23:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:26.889 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:26.889 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:26.889 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.889 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:27.147 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.147 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:27.147 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.147 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.406 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.406 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.406 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.406 14:23:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.664 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.665 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:27.665 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.665 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:27.923 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.923 14:23:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:27.923 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.923 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:28.181 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.181 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:28.181 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.181 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.439 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.439 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:28.439 14:23:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:28.697 14:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:28.955 14:23:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:29.887 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:29.887 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:29.887 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.887 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.145 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:30.145 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:30.145 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.145 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:30.403 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.403 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:30.403 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.403 14:23:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:30.660 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.660 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:30.660 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.660 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:30.918 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.918 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:30.918 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.918 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:31.177 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.177 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:31.177 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.177 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.435 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.435 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:31.435 14:24:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:31.693 14:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:31.950 14:24:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
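The repetitive blocks above all follow the same pattern from host/multipath_status.sh: flip the ANA state of one or both listeners, sleep for a second, then query the io_paths of Nvme0n1 through the bdevperf RPC socket and compare the current/connected/accessible fields with jq. A minimal paraphrase of that pattern is sketched below; the $rpc_py shorthand and the port_status re-implementation are mine (the real helper in the script may differ in detail), and the expected values shown are just one of the combinations checked above.

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Change the ANA state of each listener (the set_ANA_state step traced above).
$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
$rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
sleep 1

# Read one attribute of one path, keyed by listener port (the port_status step traced above).
port_status() {
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

port_status 4420 current true       # 4420 stays the active path
port_status 4421 accessible false   # 4421 is unreachable once set inaccessible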
00:22:32.909 14:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:32.909 14:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:32.909 14:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.909 14:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:33.171 14:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.171 14:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:33.171 14:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.171 14:24:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:33.429 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.429 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:33.429 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.429 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:33.687 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.687 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:33.687 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.687 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:33.948 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.948 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:33.948 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.948 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:34.206 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.206 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:34.206 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.206 14:24:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:34.463 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.463 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:34.463 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:34.721 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:34.980 14:24:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:36.353 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:36.353 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:36.353 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.353 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:36.354 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.354 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:36.354 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.354 14:24:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:36.611 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.611 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:36.611 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.611 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:36.868 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:22:36.868 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:36.868 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.868 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:37.126 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.126 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:37.126 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.126 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:37.383 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:37.383 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:37.383 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:37.383 14:24:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 987435 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 987435 ']' 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 987435 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 987435 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 987435' 00:22:37.640 killing process with pid 987435 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 987435 00:22:37.640 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 987435 00:22:37.901 Connection closed with partial response: 00:22:37.901 00:22:37.901 00:22:37.901 
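Before the bdevperf process is killed above, the trace flips the two listeners to different ANA states (set_ANA_state non_optimized inaccessible) and re-checks the paths, expecting port 4421 to report current=false and accessible=false. A minimal sketch of that helper, assuming it simply issues one nvmf_subsystem_listener_set_ana_state RPC per listener (the NQN, address, ports and state names are taken from the trace; the wrapper itself is a reconstruction, not the actual test source):

    # Set the ANA state of each listener of cnode1 independently; the host-side
    # path flags checked by port_status follow these states.
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$state_4420"
        $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$state_4421"
    }

    # As exercised above: after
    #   set_ANA_state non_optimized inaccessible
    # the 4420 path stays usable, while the 4421 path is expected to report
    # current=false, connected=true, accessible=false, matching
    # check_status true false true true true false.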
14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 987435 00:22:37.901 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:37.901 [2024-07-25 14:23:33.052240] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:22:37.901 [2024-07-25 14:23:33.052335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid987435 ] 00:22:37.901 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.901 [2024-07-25 14:23:33.113572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.901 [2024-07-25 14:23:33.228421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.901 Running I/O for 90 seconds... 00:22:37.901 [2024-07-25 14:23:48.884077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:37.901 
[2024-07-25 14:23:48.884475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.884900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.884917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.885566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.885593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.885623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.885642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.885667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.885685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.885709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.885726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.885759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.885776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.885806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.885824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.885849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.901 [2024-07-25 14:23:48.885865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:37.901 [2024-07-25 14:23:48.885889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.885906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.885931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.885948] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.885972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.885988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886387] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.886935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.886953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 
nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.902 [2024-07-25 14:23:48.887459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:37.902 [2024-07-25 14:23:48.887486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.887951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.887977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.903 [2024-07-25 14:23:48.888155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 
00:22:37.903 [2024-07-25 14:23:48.888268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:24 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.888962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.888978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.889003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.903 [2024-07-25 14:23:48.889020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:37.903 [2024-07-25 14:23:48.889045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889131] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 
14:23:48.889734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.889969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.889998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.904 [2024-07-25 14:23:48.890015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79992 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890699] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:23:48.890746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:23:48.890763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:24:04.590781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:24:04.590876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:37.904 [2024-07-25 14:24:04.590968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.904 [2024-07-25 14:24:04.590999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:92000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:92032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.905 [2024-07-25 14:24:04.591163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.905 [2024-07-25 14:24:04.591204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.905 [2024-07-25 14:24:04.591260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 
14:24:04.591323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:92160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.905 [2024-07-25 14:24:04.591491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:92456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.905 [2024-07-25 14:24:04.591693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:92472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.905 [2024-07-25 14:24:04.591738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:92104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:92200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.591968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:92232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.591985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.592007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.592023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.592045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.592070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.592095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.592111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.592133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.592149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.592172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:92256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.592203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.592226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.592242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.593383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.593419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.593442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.593458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.593478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.593493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.593513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:92336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.593528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.593548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:37.905 [2024-07-25 14:24:04.593571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.594435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:92488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.905 [2024-07-25 14:24:04.594461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:37.905 [2024-07-25 14:24:04.594489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:37.905 [2024-07-25 14:24:04.594508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:37.905 Received shutdown signal, test time was about 32.534355 seconds 00:22:37.905 00:22:37.905 Latency(us) 00:22:37.905 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.905 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:37.905 Verification LBA range: start 0x0 length 0x4000 00:22:37.905 Nvme0n1 : 32.53 8103.15 31.65 0.00 0.00 15751.51 837.40 4026531.84 00:22:37.905 =================================================================================================================== 00:22:37.905 Total : 8103.15 31.65 0.00 0.00 15751.51 837.40 4026531.84 00:22:37.905 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:38.163 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:38.163 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:38.163 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:38.163 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:38.163 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:38.163 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:38.163 14:24:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:38.163 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:38.163 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:38.163 rmmod nvme_tcp 00:22:38.163 rmmod nvme_fabrics 00:22:38.420 rmmod nvme_keyring 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 987152 ']' 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 987152 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 987152 ']' 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 987152 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 987152 00:22:38.420 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:38.421 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:38.421 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 987152' 00:22:38.421 killing process with pid 987152 00:22:38.421 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 987152 00:22:38.421 14:24:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 987152 00:22:38.680 14:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:38.680 14:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:38.680 14:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:38.680 14:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:38.680 14:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:38.680 14:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.680 14:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.680 14:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.579 14:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:40.579 00:22:40.579 real 0m41.385s 00:22:40.579 user 2m4.406s 00:22:40.579 sys 0m10.673s 00:22:40.579 14:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:22:40.579 14:24:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:40.579 ************************************ 00:22:40.579 END TEST nvmf_host_multipath_status 00:22:40.579 ************************************ 00:22:40.579 14:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:40.579 14:24:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:40.579 14:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:40.579 14:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:40.579 14:24:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:40.837 ************************************ 00:22:40.837 START TEST nvmf_discovery_remove_ifc 00:22:40.837 ************************************ 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:40.837 * Looking for test storage... 00:22:40.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:40.837 
14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:40.837 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:40.838 
14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:40.838 14:24:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.370 14:24:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 
-- # pci_devs=("${e810[@]}") 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:43.370 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:43.370 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:43.370 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:43.370 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:43.370 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:43.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:43.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:22:43.371 00:22:43.371 --- 10.0.0.2 ping statistics --- 00:22:43.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.371 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:43.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:43.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:22:43.371 00:22:43.371 --- 10.0.0.1 ping statistics --- 00:22:43.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:43.371 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=994262 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 994262 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- 
# '[' -z 994262 ']' 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.371 [2024-07-25 14:24:12.691084] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:22:43.371 [2024-07-25 14:24:12.691161] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.371 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.371 [2024-07-25 14:24:12.756603] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.371 [2024-07-25 14:24:12.861144] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.371 [2024-07-25 14:24:12.861199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.371 [2024-07-25 14:24:12.861228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.371 [2024-07-25 14:24:12.861247] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.371 [2024-07-25 14:24:12.861258] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:43.371 [2024-07-25 14:24:12.861285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.371 14:24:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.371 [2024-07-25 14:24:12.995448] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.371 [2024-07-25 14:24:13.003612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:43.371 null0 00:22:43.630 [2024-07-25 14:24:13.035552] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=994295 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 994295 /tmp/host.sock 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 994295 ']' 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:43.630 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:43.630 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.630 [2024-07-25 14:24:13.100754] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:22:43.630 [2024-07-25 14:24:13.100832] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994295 ] 00:22:43.630 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.630 [2024-07-25 14:24:13.163955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.630 [2024-07-25 14:24:13.272633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:43.887 14:24:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.258 [2024-07-25 14:24:14.472732] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:45.258 [2024-07-25 14:24:14.472762] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:45.258 [2024-07-25 14:24:14.472787] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:45.258 [2024-07-25 14:24:14.601208] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:45.258 [2024-07-25 14:24:14.664447] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:45.258 [2024-07-25 14:24:14.664503] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:45.258 [2024-07-25 14:24:14.664542] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:45.258 [2024-07-25 
14:24:14.664566] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:45.258 [2024-07-25 14:24:14.664596] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.258 [2024-07-25 14:24:14.671260] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b068e0 was disconnected and freed. delete nvme_qpair. 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:45.258 14:24:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:46.190 14:24:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.563 14:24:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:48.492 14:24:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 
1 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.421 14:24:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.351 14:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.351 14:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.351 14:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.351 14:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.351 14:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.351 14:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.351 14:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.351 14:24:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.609 14:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:50.609 14:24:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.609 [2024-07-25 14:24:20.105981] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:50.609 [2024-07-25 14:24:20.106096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.609 [2024-07-25 14:24:20.106130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.609 [2024-07-25 14:24:20.106150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.609 [2024-07-25 14:24:20.106163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.609 [2024-07-25 14:24:20.106177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.609 [2024-07-25 14:24:20.106190] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.609 [2024-07-25 14:24:20.106204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.609 [2024-07-25 14:24:20.106230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.609 [2024-07-25 14:24:20.106245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.609 [2024-07-25 14:24:20.106257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.609 [2024-07-25 14:24:20.106271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acd320 is same with the state(5) to be set 00:22:50.609 [2024-07-25 14:24:20.115996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acd320 (9): Bad file descriptor 00:22:50.609 [2024-07-25 14:24:20.126056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.570 [2024-07-25 14:24:21.133106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:51.570 [2024-07-25 14:24:21.133177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1acd320 with addr=10.0.0.2, port=4420 00:22:51.570 [2024-07-25 14:24:21.133207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1acd320 is same with the state(5) to be set 00:22:51.570 [2024-07-25 14:24:21.133263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acd320 (9): Bad file descriptor 00:22:51.570 [2024-07-25 14:24:21.133721] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:51.570 [2024-07-25 14:24:21.133772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:51.570 [2024-07-25 14:24:21.133792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:51.570 [2024-07-25 14:24:21.133812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:51.570 [2024-07-25 14:24:21.133848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:51.570 [2024-07-25 14:24:21.133867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:51.570 14:24:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.502 [2024-07-25 14:24:22.136370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:52.502 [2024-07-25 14:24:22.136422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:52.502 [2024-07-25 14:24:22.136437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:52.502 [2024-07-25 14:24:22.136450] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:52.502 [2024-07-25 14:24:22.136487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:52.502 [2024-07-25 14:24:22.136532] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:52.502 [2024-07-25 14:24:22.136604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.502 [2024-07-25 14:24:22.136628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.502 [2024-07-25 14:24:22.136649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.502 [2024-07-25 14:24:22.136662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.502 [2024-07-25 14:24:22.136676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.502 [2024-07-25 14:24:22.136690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.502 [2024-07-25 14:24:22.136704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.502 [2024-07-25 14:24:22.136717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.502 [2024-07-25 14:24:22.136731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:52.502 [2024-07-25 14:24:22.136745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.502 [2024-07-25 14:24:22.136758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:52.502 [2024-07-25 14:24:22.136809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1acc780 (9): Bad file descriptor 00:22:52.502 [2024-07-25 14:24:22.137803] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:52.502 [2024-07-25 14:24:22.137824] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:52.502 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.502 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.502 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.502 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.502 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.502 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.502 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:52.760 14:24:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.691 14:24:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:53.691 14:24:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.622 [2024-07-25 14:24:24.188767] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:54.622 [2024-07-25 14:24:24.188793] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:54.622 [2024-07-25 14:24:24.188816] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:54.880 [2024-07-25 14:24:24.316268] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:54.880 14:24:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:54.880 [2024-07-25 14:24:24.419978] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:54.880 [2024-07-25 14:24:24.420025] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:54.880 [2024-07-25 14:24:24.420079] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:54.880 [2024-07-25 14:24:24.420104] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:54.880 [2024-07-25 14:24:24.420116] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:54.880 [2024-07-25 14:24:24.427331] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1af00d0 was disconnected and freed. 
delete nvme_qpair. 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 994295 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 994295 ']' 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 994295 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 994295 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 994295' 00:22:55.813 killing process with pid 994295 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 994295 00:22:55.813 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 994295 00:22:56.071 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:56.071 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:56.071 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:56.071 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.071 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:56.071 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.071 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.071 rmmod nvme_tcp 00:22:56.329 rmmod nvme_fabrics 00:22:56.329 rmmod nvme_keyring 
00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 994262 ']' 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 994262 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 994262 ']' 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 994262 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 994262 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 994262' 00:22:56.329 killing process with pid 994262 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 994262 00:22:56.329 14:24:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 994262 00:22:56.587 14:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:56.587 14:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:56.587 14:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:56.587 14:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.587 14:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.587 14:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.587 14:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.587 14:24:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.490 14:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.490 00:22:58.490 real 0m17.857s 00:22:58.490 user 0m25.685s 00:22:58.490 sys 0m3.144s 00:22:58.490 14:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:58.490 14:24:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.490 ************************************ 00:22:58.490 END TEST nvmf_discovery_remove_ifc 00:22:58.490 ************************************ 00:22:58.490 14:24:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:58.490 14:24:28 nvmf_tcp.nvmf_host 
-- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:58.490 14:24:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:58.490 14:24:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:58.490 14:24:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:58.749 ************************************ 00:22:58.749 START TEST nvmf_identify_kernel_target 00:22:58.749 ************************************ 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:58.749 * Looking for test storage... 00:22:58.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.749 14:24:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.749 14:24:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:00.650 14:24:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:00.650 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:00.650 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:00.650 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:00.651 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:00.651 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:00.651 
14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:00.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:00.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:23:00.651 00:23:00.651 --- 10.0.0.2 ping statistics --- 00:23:00.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.651 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:00.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:23:00.651 00:23:00.651 --- 10.0.0.1 ping statistics --- 00:23:00.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.651 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:00.651 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:00.910 14:24:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:01.846 Waiting for block devices as requested 00:23:01.846 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:02.104 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:02.104 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:02.363 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:02.363 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:02.363 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:02.363 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:02.624 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:02.624 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:02.624 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:02.884 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:02.884 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:02.884 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:02.884 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:03.143 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:03.143 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:03.143 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:03.402 No valid GPT data, bailing 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:03.402 14:24:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:03.402 00:23:03.402 Discovery Log Number of Records 2, Generation counter 2 00:23:03.402 =====Discovery Log Entry 0====== 00:23:03.402 trtype: tcp 00:23:03.402 adrfam: ipv4 00:23:03.402 subtype: current discovery subsystem 00:23:03.402 treq: not specified, sq flow control disable supported 00:23:03.402 portid: 1 00:23:03.402 trsvcid: 4420 00:23:03.402 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:03.402 traddr: 10.0.0.1 00:23:03.402 eflags: none 00:23:03.402 sectype: none 00:23:03.402 =====Discovery Log Entry 1====== 00:23:03.402 trtype: tcp 00:23:03.402 adrfam: ipv4 00:23:03.402 subtype: nvme subsystem 00:23:03.402 treq: not specified, sq flow control disable supported 00:23:03.402 portid: 1 00:23:03.402 trsvcid: 4420 00:23:03.402 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:03.402 traddr: 10.0.0.1 00:23:03.402 eflags: none 00:23:03.402 sectype: none 00:23:03.402 14:24:33 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:03.402 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:03.663 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.663 ===================================================== 00:23:03.663 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:03.663 ===================================================== 00:23:03.663 Controller Capabilities/Features 00:23:03.663 ================================ 00:23:03.663 Vendor ID: 0000 00:23:03.663 Subsystem Vendor ID: 0000 00:23:03.663 Serial Number: 64a1dda71cb2f135ff37 00:23:03.663 Model Number: Linux 00:23:03.663 Firmware Version: 6.7.0-68 00:23:03.663 Recommended Arb Burst: 0 00:23:03.663 IEEE OUI Identifier: 00 00 00 00:23:03.663 Multi-path I/O 00:23:03.663 May have multiple subsystem ports: No 00:23:03.663 May have multiple controllers: No 00:23:03.663 Associated with SR-IOV VF: No 00:23:03.663 Max Data Transfer Size: Unlimited 00:23:03.663 Max Number of Namespaces: 0 00:23:03.663 Max Number of I/O Queues: 1024 00:23:03.663 NVMe Specification Version (VS): 1.3 00:23:03.663 NVMe Specification Version (Identify): 1.3 00:23:03.663 Maximum Queue Entries: 1024 00:23:03.663 Contiguous Queues Required: No 00:23:03.663 Arbitration Mechanisms Supported 00:23:03.663 Weighted Round Robin: Not Supported 00:23:03.663 Vendor Specific: Not Supported 00:23:03.663 Reset Timeout: 7500 ms 00:23:03.663 Doorbell Stride: 4 bytes 00:23:03.663 NVM Subsystem Reset: Not Supported 00:23:03.663 Command Sets Supported 00:23:03.663 NVM Command Set: Supported 00:23:03.663 Boot Partition: Not Supported 00:23:03.663 Memory Page Size Minimum: 4096 bytes 00:23:03.663 Memory Page Size Maximum: 4096 bytes 00:23:03.663 Persistent Memory Region: Not Supported 00:23:03.663 Optional Asynchronous Events Supported 00:23:03.663 Namespace Attribute Notices: Not Supported 00:23:03.663 Firmware Activation Notices: Not Supported 00:23:03.663 ANA Change Notices: Not Supported 00:23:03.663 PLE Aggregate Log Change Notices: Not Supported 00:23:03.663 LBA Status Info Alert Notices: Not Supported 00:23:03.663 EGE Aggregate Log Change Notices: Not Supported 00:23:03.663 Normal NVM Subsystem Shutdown event: Not Supported 00:23:03.663 Zone Descriptor Change Notices: Not Supported 00:23:03.663 Discovery Log Change Notices: Supported 00:23:03.663 Controller Attributes 00:23:03.663 128-bit Host Identifier: Not Supported 00:23:03.663 Non-Operational Permissive Mode: Not Supported 00:23:03.663 NVM Sets: Not Supported 00:23:03.663 Read Recovery Levels: Not Supported 00:23:03.663 Endurance Groups: Not Supported 00:23:03.663 Predictable Latency Mode: Not Supported 00:23:03.663 Traffic Based Keep ALive: Not Supported 00:23:03.663 Namespace Granularity: Not Supported 00:23:03.664 SQ Associations: Not Supported 00:23:03.664 UUID List: Not Supported 00:23:03.664 Multi-Domain Subsystem: Not Supported 00:23:03.664 Fixed Capacity Management: Not Supported 00:23:03.664 Variable Capacity Management: Not Supported 00:23:03.664 Delete Endurance Group: Not Supported 00:23:03.664 Delete NVM Set: Not Supported 00:23:03.664 Extended LBA Formats Supported: Not Supported 00:23:03.664 Flexible Data Placement Supported: Not Supported 00:23:03.664 00:23:03.664 Controller Memory Buffer Support 00:23:03.664 ================================ 00:23:03.664 Supported: No 
00:23:03.664 00:23:03.664 Persistent Memory Region Support 00:23:03.664 ================================ 00:23:03.664 Supported: No 00:23:03.664 00:23:03.664 Admin Command Set Attributes 00:23:03.664 ============================ 00:23:03.664 Security Send/Receive: Not Supported 00:23:03.664 Format NVM: Not Supported 00:23:03.664 Firmware Activate/Download: Not Supported 00:23:03.664 Namespace Management: Not Supported 00:23:03.664 Device Self-Test: Not Supported 00:23:03.664 Directives: Not Supported 00:23:03.664 NVMe-MI: Not Supported 00:23:03.664 Virtualization Management: Not Supported 00:23:03.664 Doorbell Buffer Config: Not Supported 00:23:03.664 Get LBA Status Capability: Not Supported 00:23:03.664 Command & Feature Lockdown Capability: Not Supported 00:23:03.664 Abort Command Limit: 1 00:23:03.664 Async Event Request Limit: 1 00:23:03.664 Number of Firmware Slots: N/A 00:23:03.664 Firmware Slot 1 Read-Only: N/A 00:23:03.664 Firmware Activation Without Reset: N/A 00:23:03.664 Multiple Update Detection Support: N/A 00:23:03.664 Firmware Update Granularity: No Information Provided 00:23:03.664 Per-Namespace SMART Log: No 00:23:03.664 Asymmetric Namespace Access Log Page: Not Supported 00:23:03.664 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:03.664 Command Effects Log Page: Not Supported 00:23:03.664 Get Log Page Extended Data: Supported 00:23:03.664 Telemetry Log Pages: Not Supported 00:23:03.664 Persistent Event Log Pages: Not Supported 00:23:03.664 Supported Log Pages Log Page: May Support 00:23:03.664 Commands Supported & Effects Log Page: Not Supported 00:23:03.664 Feature Identifiers & Effects Log Page:May Support 00:23:03.664 NVMe-MI Commands & Effects Log Page: May Support 00:23:03.664 Data Area 4 for Telemetry Log: Not Supported 00:23:03.664 Error Log Page Entries Supported: 1 00:23:03.664 Keep Alive: Not Supported 00:23:03.664 00:23:03.664 NVM Command Set Attributes 00:23:03.664 ========================== 00:23:03.664 Submission Queue Entry Size 00:23:03.664 Max: 1 00:23:03.664 Min: 1 00:23:03.664 Completion Queue Entry Size 00:23:03.664 Max: 1 00:23:03.664 Min: 1 00:23:03.664 Number of Namespaces: 0 00:23:03.664 Compare Command: Not Supported 00:23:03.664 Write Uncorrectable Command: Not Supported 00:23:03.664 Dataset Management Command: Not Supported 00:23:03.664 Write Zeroes Command: Not Supported 00:23:03.664 Set Features Save Field: Not Supported 00:23:03.664 Reservations: Not Supported 00:23:03.664 Timestamp: Not Supported 00:23:03.664 Copy: Not Supported 00:23:03.664 Volatile Write Cache: Not Present 00:23:03.664 Atomic Write Unit (Normal): 1 00:23:03.664 Atomic Write Unit (PFail): 1 00:23:03.664 Atomic Compare & Write Unit: 1 00:23:03.664 Fused Compare & Write: Not Supported 00:23:03.664 Scatter-Gather List 00:23:03.664 SGL Command Set: Supported 00:23:03.664 SGL Keyed: Not Supported 00:23:03.664 SGL Bit Bucket Descriptor: Not Supported 00:23:03.664 SGL Metadata Pointer: Not Supported 00:23:03.664 Oversized SGL: Not Supported 00:23:03.664 SGL Metadata Address: Not Supported 00:23:03.664 SGL Offset: Supported 00:23:03.664 Transport SGL Data Block: Not Supported 00:23:03.664 Replay Protected Memory Block: Not Supported 00:23:03.664 00:23:03.664 Firmware Slot Information 00:23:03.664 ========================= 00:23:03.664 Active slot: 0 00:23:03.664 00:23:03.664 00:23:03.664 Error Log 00:23:03.664 ========= 00:23:03.664 00:23:03.664 Active Namespaces 00:23:03.664 ================= 00:23:03.664 Discovery Log Page 00:23:03.664 ================== 00:23:03.664 
Generation Counter: 2 00:23:03.664 Number of Records: 2 00:23:03.664 Record Format: 0 00:23:03.664 00:23:03.664 Discovery Log Entry 0 00:23:03.664 ---------------------- 00:23:03.664 Transport Type: 3 (TCP) 00:23:03.664 Address Family: 1 (IPv4) 00:23:03.664 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:03.664 Entry Flags: 00:23:03.664 Duplicate Returned Information: 0 00:23:03.664 Explicit Persistent Connection Support for Discovery: 0 00:23:03.664 Transport Requirements: 00:23:03.664 Secure Channel: Not Specified 00:23:03.664 Port ID: 1 (0x0001) 00:23:03.664 Controller ID: 65535 (0xffff) 00:23:03.664 Admin Max SQ Size: 32 00:23:03.664 Transport Service Identifier: 4420 00:23:03.664 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:03.664 Transport Address: 10.0.0.1 00:23:03.664 Discovery Log Entry 1 00:23:03.664 ---------------------- 00:23:03.664 Transport Type: 3 (TCP) 00:23:03.664 Address Family: 1 (IPv4) 00:23:03.664 Subsystem Type: 2 (NVM Subsystem) 00:23:03.664 Entry Flags: 00:23:03.664 Duplicate Returned Information: 0 00:23:03.664 Explicit Persistent Connection Support for Discovery: 0 00:23:03.664 Transport Requirements: 00:23:03.664 Secure Channel: Not Specified 00:23:03.664 Port ID: 1 (0x0001) 00:23:03.664 Controller ID: 65535 (0xffff) 00:23:03.664 Admin Max SQ Size: 32 00:23:03.664 Transport Service Identifier: 4420 00:23:03.664 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:03.664 Transport Address: 10.0.0.1 00:23:03.664 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:03.664 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.664 get_feature(0x01) failed 00:23:03.664 get_feature(0x02) failed 00:23:03.664 get_feature(0x04) failed 00:23:03.664 ===================================================== 00:23:03.664 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:03.664 ===================================================== 00:23:03.664 Controller Capabilities/Features 00:23:03.664 ================================ 00:23:03.664 Vendor ID: 0000 00:23:03.664 Subsystem Vendor ID: 0000 00:23:03.664 Serial Number: fd73c234782242158e20 00:23:03.664 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:03.664 Firmware Version: 6.7.0-68 00:23:03.664 Recommended Arb Burst: 6 00:23:03.664 IEEE OUI Identifier: 00 00 00 00:23:03.664 Multi-path I/O 00:23:03.664 May have multiple subsystem ports: Yes 00:23:03.664 May have multiple controllers: Yes 00:23:03.664 Associated with SR-IOV VF: No 00:23:03.664 Max Data Transfer Size: Unlimited 00:23:03.664 Max Number of Namespaces: 1024 00:23:03.664 Max Number of I/O Queues: 128 00:23:03.664 NVMe Specification Version (VS): 1.3 00:23:03.664 NVMe Specification Version (Identify): 1.3 00:23:03.664 Maximum Queue Entries: 1024 00:23:03.664 Contiguous Queues Required: No 00:23:03.664 Arbitration Mechanisms Supported 00:23:03.664 Weighted Round Robin: Not Supported 00:23:03.664 Vendor Specific: Not Supported 00:23:03.664 Reset Timeout: 7500 ms 00:23:03.664 Doorbell Stride: 4 bytes 00:23:03.664 NVM Subsystem Reset: Not Supported 00:23:03.664 Command Sets Supported 00:23:03.664 NVM Command Set: Supported 00:23:03.664 Boot Partition: Not Supported 00:23:03.664 Memory Page Size Minimum: 4096 bytes 00:23:03.664 Memory Page Size Maximum: 4096 bytes 00:23:03.664 
Persistent Memory Region: Not Supported 00:23:03.664 Optional Asynchronous Events Supported 00:23:03.664 Namespace Attribute Notices: Supported 00:23:03.664 Firmware Activation Notices: Not Supported 00:23:03.664 ANA Change Notices: Supported 00:23:03.664 PLE Aggregate Log Change Notices: Not Supported 00:23:03.664 LBA Status Info Alert Notices: Not Supported 00:23:03.664 EGE Aggregate Log Change Notices: Not Supported 00:23:03.664 Normal NVM Subsystem Shutdown event: Not Supported 00:23:03.665 Zone Descriptor Change Notices: Not Supported 00:23:03.665 Discovery Log Change Notices: Not Supported 00:23:03.665 Controller Attributes 00:23:03.665 128-bit Host Identifier: Supported 00:23:03.665 Non-Operational Permissive Mode: Not Supported 00:23:03.665 NVM Sets: Not Supported 00:23:03.665 Read Recovery Levels: Not Supported 00:23:03.665 Endurance Groups: Not Supported 00:23:03.665 Predictable Latency Mode: Not Supported 00:23:03.665 Traffic Based Keep ALive: Supported 00:23:03.665 Namespace Granularity: Not Supported 00:23:03.665 SQ Associations: Not Supported 00:23:03.665 UUID List: Not Supported 00:23:03.665 Multi-Domain Subsystem: Not Supported 00:23:03.665 Fixed Capacity Management: Not Supported 00:23:03.665 Variable Capacity Management: Not Supported 00:23:03.665 Delete Endurance Group: Not Supported 00:23:03.665 Delete NVM Set: Not Supported 00:23:03.665 Extended LBA Formats Supported: Not Supported 00:23:03.665 Flexible Data Placement Supported: Not Supported 00:23:03.665 00:23:03.665 Controller Memory Buffer Support 00:23:03.665 ================================ 00:23:03.665 Supported: No 00:23:03.665 00:23:03.665 Persistent Memory Region Support 00:23:03.665 ================================ 00:23:03.665 Supported: No 00:23:03.665 00:23:03.665 Admin Command Set Attributes 00:23:03.665 ============================ 00:23:03.665 Security Send/Receive: Not Supported 00:23:03.665 Format NVM: Not Supported 00:23:03.665 Firmware Activate/Download: Not Supported 00:23:03.665 Namespace Management: Not Supported 00:23:03.665 Device Self-Test: Not Supported 00:23:03.665 Directives: Not Supported 00:23:03.665 NVMe-MI: Not Supported 00:23:03.665 Virtualization Management: Not Supported 00:23:03.665 Doorbell Buffer Config: Not Supported 00:23:03.665 Get LBA Status Capability: Not Supported 00:23:03.665 Command & Feature Lockdown Capability: Not Supported 00:23:03.665 Abort Command Limit: 4 00:23:03.665 Async Event Request Limit: 4 00:23:03.665 Number of Firmware Slots: N/A 00:23:03.665 Firmware Slot 1 Read-Only: N/A 00:23:03.665 Firmware Activation Without Reset: N/A 00:23:03.665 Multiple Update Detection Support: N/A 00:23:03.665 Firmware Update Granularity: No Information Provided 00:23:03.665 Per-Namespace SMART Log: Yes 00:23:03.665 Asymmetric Namespace Access Log Page: Supported 00:23:03.665 ANA Transition Time : 10 sec 00:23:03.665 00:23:03.665 Asymmetric Namespace Access Capabilities 00:23:03.665 ANA Optimized State : Supported 00:23:03.665 ANA Non-Optimized State : Supported 00:23:03.665 ANA Inaccessible State : Supported 00:23:03.665 ANA Persistent Loss State : Supported 00:23:03.665 ANA Change State : Supported 00:23:03.665 ANAGRPID is not changed : No 00:23:03.665 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:03.665 00:23:03.665 ANA Group Identifier Maximum : 128 00:23:03.665 Number of ANA Group Identifiers : 128 00:23:03.665 Max Number of Allowed Namespaces : 1024 00:23:03.665 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:03.665 Command Effects Log Page: Supported 
00:23:03.665 Get Log Page Extended Data: Supported 00:23:03.665 Telemetry Log Pages: Not Supported 00:23:03.665 Persistent Event Log Pages: Not Supported 00:23:03.665 Supported Log Pages Log Page: May Support 00:23:03.665 Commands Supported & Effects Log Page: Not Supported 00:23:03.665 Feature Identifiers & Effects Log Page:May Support 00:23:03.665 NVMe-MI Commands & Effects Log Page: May Support 00:23:03.665 Data Area 4 for Telemetry Log: Not Supported 00:23:03.665 Error Log Page Entries Supported: 128 00:23:03.665 Keep Alive: Supported 00:23:03.665 Keep Alive Granularity: 1000 ms 00:23:03.665 00:23:03.665 NVM Command Set Attributes 00:23:03.665 ========================== 00:23:03.665 Submission Queue Entry Size 00:23:03.665 Max: 64 00:23:03.665 Min: 64 00:23:03.665 Completion Queue Entry Size 00:23:03.665 Max: 16 00:23:03.665 Min: 16 00:23:03.665 Number of Namespaces: 1024 00:23:03.665 Compare Command: Not Supported 00:23:03.665 Write Uncorrectable Command: Not Supported 00:23:03.665 Dataset Management Command: Supported 00:23:03.665 Write Zeroes Command: Supported 00:23:03.665 Set Features Save Field: Not Supported 00:23:03.665 Reservations: Not Supported 00:23:03.665 Timestamp: Not Supported 00:23:03.665 Copy: Not Supported 00:23:03.665 Volatile Write Cache: Present 00:23:03.665 Atomic Write Unit (Normal): 1 00:23:03.665 Atomic Write Unit (PFail): 1 00:23:03.665 Atomic Compare & Write Unit: 1 00:23:03.665 Fused Compare & Write: Not Supported 00:23:03.665 Scatter-Gather List 00:23:03.665 SGL Command Set: Supported 00:23:03.665 SGL Keyed: Not Supported 00:23:03.665 SGL Bit Bucket Descriptor: Not Supported 00:23:03.665 SGL Metadata Pointer: Not Supported 00:23:03.665 Oversized SGL: Not Supported 00:23:03.665 SGL Metadata Address: Not Supported 00:23:03.665 SGL Offset: Supported 00:23:03.665 Transport SGL Data Block: Not Supported 00:23:03.665 Replay Protected Memory Block: Not Supported 00:23:03.665 00:23:03.665 Firmware Slot Information 00:23:03.665 ========================= 00:23:03.665 Active slot: 0 00:23:03.665 00:23:03.665 Asymmetric Namespace Access 00:23:03.665 =========================== 00:23:03.665 Change Count : 0 00:23:03.665 Number of ANA Group Descriptors : 1 00:23:03.665 ANA Group Descriptor : 0 00:23:03.665 ANA Group ID : 1 00:23:03.665 Number of NSID Values : 1 00:23:03.665 Change Count : 0 00:23:03.665 ANA State : 1 00:23:03.665 Namespace Identifier : 1 00:23:03.665 00:23:03.665 Commands Supported and Effects 00:23:03.665 ============================== 00:23:03.665 Admin Commands 00:23:03.665 -------------- 00:23:03.665 Get Log Page (02h): Supported 00:23:03.665 Identify (06h): Supported 00:23:03.665 Abort (08h): Supported 00:23:03.665 Set Features (09h): Supported 00:23:03.665 Get Features (0Ah): Supported 00:23:03.665 Asynchronous Event Request (0Ch): Supported 00:23:03.665 Keep Alive (18h): Supported 00:23:03.666 I/O Commands 00:23:03.666 ------------ 00:23:03.666 Flush (00h): Supported 00:23:03.666 Write (01h): Supported LBA-Change 00:23:03.666 Read (02h): Supported 00:23:03.666 Write Zeroes (08h): Supported LBA-Change 00:23:03.666 Dataset Management (09h): Supported 00:23:03.666 00:23:03.666 Error Log 00:23:03.666 ========= 00:23:03.666 Entry: 0 00:23:03.666 Error Count: 0x3 00:23:03.666 Submission Queue Id: 0x0 00:23:03.666 Command Id: 0x5 00:23:03.666 Phase Bit: 0 00:23:03.666 Status Code: 0x2 00:23:03.666 Status Code Type: 0x0 00:23:03.666 Do Not Retry: 1 00:23:03.666 Error Location: 0x28 00:23:03.666 LBA: 0x0 00:23:03.666 Namespace: 0x0 00:23:03.666 Vendor Log 
Page: 0x0 00:23:03.666 ----------- 00:23:03.666 Entry: 1 00:23:03.666 Error Count: 0x2 00:23:03.666 Submission Queue Id: 0x0 00:23:03.666 Command Id: 0x5 00:23:03.666 Phase Bit: 0 00:23:03.666 Status Code: 0x2 00:23:03.666 Status Code Type: 0x0 00:23:03.666 Do Not Retry: 1 00:23:03.666 Error Location: 0x28 00:23:03.666 LBA: 0x0 00:23:03.666 Namespace: 0x0 00:23:03.666 Vendor Log Page: 0x0 00:23:03.666 ----------- 00:23:03.666 Entry: 2 00:23:03.666 Error Count: 0x1 00:23:03.666 Submission Queue Id: 0x0 00:23:03.666 Command Id: 0x4 00:23:03.666 Phase Bit: 0 00:23:03.666 Status Code: 0x2 00:23:03.666 Status Code Type: 0x0 00:23:03.666 Do Not Retry: 1 00:23:03.666 Error Location: 0x28 00:23:03.666 LBA: 0x0 00:23:03.666 Namespace: 0x0 00:23:03.666 Vendor Log Page: 0x0 00:23:03.666 00:23:03.666 Number of Queues 00:23:03.666 ================ 00:23:03.666 Number of I/O Submission Queues: 128 00:23:03.666 Number of I/O Completion Queues: 128 00:23:03.666 00:23:03.666 ZNS Specific Controller Data 00:23:03.666 ============================ 00:23:03.666 Zone Append Size Limit: 0 00:23:03.666 00:23:03.666 00:23:03.666 Active Namespaces 00:23:03.666 ================= 00:23:03.666 get_feature(0x05) failed 00:23:03.666 Namespace ID:1 00:23:03.666 Command Set Identifier: NVM (00h) 00:23:03.666 Deallocate: Supported 00:23:03.666 Deallocated/Unwritten Error: Not Supported 00:23:03.666 Deallocated Read Value: Unknown 00:23:03.666 Deallocate in Write Zeroes: Not Supported 00:23:03.666 Deallocated Guard Field: 0xFFFF 00:23:03.666 Flush: Supported 00:23:03.666 Reservation: Not Supported 00:23:03.666 Namespace Sharing Capabilities: Multiple Controllers 00:23:03.666 Size (in LBAs): 1953525168 (931GiB) 00:23:03.666 Capacity (in LBAs): 1953525168 (931GiB) 00:23:03.666 Utilization (in LBAs): 1953525168 (931GiB) 00:23:03.666 UUID: c3c7d747-3603-4f51-ba65-7bd503d429a4 00:23:03.666 Thin Provisioning: Not Supported 00:23:03.666 Per-NS Atomic Units: Yes 00:23:03.666 Atomic Boundary Size (Normal): 0 00:23:03.666 Atomic Boundary Size (PFail): 0 00:23:03.666 Atomic Boundary Offset: 0 00:23:03.666 NGUID/EUI64 Never Reused: No 00:23:03.666 ANA group ID: 1 00:23:03.666 Namespace Write Protected: No 00:23:03.666 Number of LBA Formats: 1 00:23:03.666 Current LBA Format: LBA Format #00 00:23:03.666 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:03.666 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.666 rmmod nvme_tcp 00:23:03.666 rmmod nvme_fabrics 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:03.666 14:24:33 
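The identify dump above comes from spdk_nvme_identify pointed at the kernel nvmet target this test configures (the same target the clean_kernel_target step below dismantles through configfs). A minimal sketch of the configfs layout behind it; the backing device path is an illustrative placeholder, not a value taken from this run:

modprobe nvmet nvmet_tcp
cd /sys/kernel/config/nvmet
mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo -n /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path   # placeholder device
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
mkdir ports/1
echo tcp      > ports/1/addr_trtype
echo ipv4     > ports/1/addr_adrfam
echo 10.0.0.1 > ports/1/addr_traddr
echo 4420     > ports/1/addr_trsvcid
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/nqn.2016-06.io.spdk:testnqn

The teardown that follows mirrors this in reverse: unlink the port/subsystem symlink, remove namespace 1, remove port 1 and the subsystem directory, then modprobe -r nvmet_tcp nvmet.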
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.666 14:24:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:06.199 14:24:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:07.133 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:07.133 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:07.133 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:07.133 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:07.133 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:07.133 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:07.133 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:07.133 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:07.133 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:07.133 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:07.133 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:07.133 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:07.133 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:07.133 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:07.133 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:23:07.133 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:08.067 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:08.067 00:23:08.067 real 0m9.553s 00:23:08.067 user 0m2.009s 00:23:08.067 sys 0m3.430s 00:23:08.067 14:24:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:08.067 14:24:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.067 ************************************ 00:23:08.067 END TEST nvmf_identify_kernel_target 00:23:08.067 ************************************ 00:23:08.325 14:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.326 ************************************ 00:23:08.326 START TEST nvmf_auth_host 00:23:08.326 ************************************ 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:08.326 * Looking for test storage... 00:23:08.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 
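Here auth.sh starts by sourcing nvmf/common.sh, which among other things captures a host identity with nvme gen-hostnqn and keeps the matching --hostnqn/--hostid arguments around for later connects. A hedged sketch of how those pieces combine when connecting by hand; the 10.0.0.2:4420 listener is the one this test configures further down, and whether a given subtest goes through nvme-cli or the SPDK RPC path, the identity fields are the same:

HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>, as captured above
HOSTID=${HOSTNQN##*uuid:}        # common.sh reuses the same UUID as the host ID
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
  -n nqn.2016-06.io.spdk:testnqn \
  --hostnqn "$HOSTNQN" --hostid "$HOSTID"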
00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:08.326 14:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:10.857 14:24:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:10.857 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
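The block above is common.sh's NIC discovery: supported devices are grouped by PCI ID (the e810, x722 and mlx tables), and each matching function is then resolved to its netdev through sysfs via the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion. Roughly the same information can be pulled by hand; the bus address below is simply the one this host reported:

lspci -Dnn | grep -i '8086:159b'            # the two E810 functions found above
ls /sys/bus/pci/devices/0000:0a:00.0/net/   # netdev name bound to that function (cvl_0_0 here)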
00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:10.857 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.857 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:10.858 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 
00:23:10.858 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.858 14:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:10.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:23:10.858 00:23:10.858 --- 10.0.0.2 ping statistics --- 00:23:10.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.858 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:10.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:23:10.858 00:23:10.858 --- 10.0.0.1 ping statistics --- 00:23:10.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.858 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1001479 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1001479 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1001479 ']' 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
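With both interfaces answering pings, nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with the nvme_auth debug log component enabled, and waitforlisten blocks until the application's RPC socket at /var/tmp/spdk.sock responds. One way to approximate that launch-and-wait pattern by hand, assuming a local SPDK checkout with rpc.py under scripts/:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
pid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$pid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
  sleep 0.5
done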
00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ed5c7dc91bf25fe00fa1051e9d85cd0 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.8VH 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ed5c7dc91bf25fe00fa1051e9d85cd0 0 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ed5c7dc91bf25fe00fa1051e9d85cd0 0 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ed5c7dc91bf25fe00fa1051e9d85cd0 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.8VH 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.8VH 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.8VH 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:10.858 14:24:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:10.858 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=73830cedc0886653146bacececbe831e3a8b2b032731d219c03b7c331a5e428c 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MHu 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 73830cedc0886653146bacececbe831e3a8b2b032731d219c03b7c331a5e428c 3 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 73830cedc0886653146bacececbe831e3a8b2b032731d219c03b7c331a5e428c 3 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=73830cedc0886653146bacececbe831e3a8b2b032731d219c03b7c331a5e428c 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MHu 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MHu 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.MHu 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d1e659dc5c519008526817a81400b40a714fbcaeaad88136 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.EyN 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d1e659dc5c519008526817a81400b40a714fbcaeaad88136 0 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d1e659dc5c519008526817a81400b40a714fbcaeaad88136 0 
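The trace above is gen_dhchap_key at work: xxd pulls random bytes from /dev/urandom as a hex string, an inline python helper wraps that string into a DHHC-1 secret (digest field 0 = unhashed, 1/2/3 = SHA-256/384/512), and the result is written 0600 to a mktemp file under /tmp. A rough reconstruction, assuming the standard DHHC-1 encoding of base64(key bytes + little-endian CRC-32 of the key):

key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex characters, used literally as the secret bytes
python3 - "$key" 0 <<'PY'
import base64, sys, zlib
raw = sys.argv[1].encode()                    # the ASCII hex string itself is the key material here
crc = zlib.crc32(raw).to_bytes(4, "little")   # DHHC-1 appends a little-endian CRC-32 of the key
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(raw + crc).decode()))
PY

Each generated secret (keys[] and the corresponding ckeys[] controller secrets) is later handed to the target with rpc_cmd keyring_file_add_key, which is what the keyring_file_add_key key0/ckey0 and following calls further down show.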
00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d1e659dc5c519008526817a81400b40a714fbcaeaad88136 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.EyN 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.EyN 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.EyN 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5d15d8db44df1c723560267ebcff697d489e47de82033bbc 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.pOj 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5d15d8db44df1c723560267ebcff697d489e47de82033bbc 2 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5d15d8db44df1c723560267ebcff697d489e47de82033bbc 2 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5d15d8db44df1c723560267ebcff697d489e47de82033bbc 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.pOj 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.pOj 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.pOj 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.117 14:24:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a804973e9ebffc7291ac276c4e5d7dff 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Jcv 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a804973e9ebffc7291ac276c4e5d7dff 1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a804973e9ebffc7291ac276c4e5d7dff 1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a804973e9ebffc7291ac276c4e5d7dff 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Jcv 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Jcv 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Jcv 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3024547b8c337a3215daf9c6f28a453e 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.OqN 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3024547b8c337a3215daf9c6f28a453e 1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3024547b8c337a3215daf9c6f28a453e 1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.117 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=3024547b8c337a3215daf9c6f28a453e 00:23:11.118 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:11.118 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.118 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.OqN 00:23:11.118 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.OqN 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.OqN 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=272583098e7c43ebda79d041844eeb813c18eb8dabfbfc79 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Tpo 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 272583098e7c43ebda79d041844eeb813c18eb8dabfbfc79 2 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 272583098e7c43ebda79d041844eeb813c18eb8dabfbfc79 2 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=272583098e7c43ebda79d041844eeb813c18eb8dabfbfc79 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Tpo 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Tpo 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Tpo 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:11.380 14:24:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9392ae20688b6f4bba04ec13980f830e 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.w8r 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9392ae20688b6f4bba04ec13980f830e 0 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9392ae20688b6f4bba04ec13980f830e 0 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9392ae20688b6f4bba04ec13980f830e 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.w8r 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.w8r 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.w8r 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f961bc3fb3662a26cef159ff55addf34560e53f4024b87091936e95412652c25 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lGB 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f961bc3fb3662a26cef159ff55addf34560e53f4024b87091936e95412652c25 3 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f961bc3fb3662a26cef159ff55addf34560e53f4024b87091936e95412652c25 3 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f961bc3fb3662a26cef159ff55addf34560e53f4024b87091936e95412652c25 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lGB 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lGB 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lGB 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:11.380 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1001479 00:23:11.381 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1001479 ']' 00:23:11.381 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.381 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:11.381 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.381 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:11.381 14:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8VH 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.MHu ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.MHu 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.EyN 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.pOj ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.pOj 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Jcv 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.OqN ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OqN 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Tpo 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.w8r ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.w8r 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.684 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lGB 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.685 14:24:41 
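The gen_dhchap_key/format_dhchap_key trace above is what produces the DHHC-1 secrets that show up later on the rpc_cmd lines. The helper bodies are not visible in the xtrace, so the following is only a minimal sketch of the formatting step, assuming the convention the logged secrets themselves decode to: the base64 payload is the ASCII hex key followed by its 4-byte little-endian CRC-32, and the two-digit field after "DHHC-1:" is the digest id (null=00, sha256=01, sha384=02, sha512=03). The function name and layout below are illustrative, not the nvmf/common.sh source.

# Sketch only: reproduce the shape of the secrets generated above.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                          # e.g. "sha384" 48 (hex characters)
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex characters of random key
    python3 -c 'import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")          # 4-byte suffix seen in the logged secrets
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))' \
        "$key" "${ids[$digest]}"
}
# e.g. gen_dhchap_key_sketch sha384 48  ->  DHHC-1:02:<base64(key + crc32)>: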
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:11.685 14:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:12.622 Waiting for block devices as requested 00:23:12.622 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:12.883 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:12.883 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:13.143 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:13.143 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:13.143 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:13.404 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:13.404 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:13.404 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:13.404 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:13.663 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:13.663 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:13.663 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:13.663 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:13.932 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:13.932 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:13.932 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:14.503 No valid GPT data, bailing 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:14.503 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:14.504 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:14.504 14:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:14.504 14:24:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:14.504 00:23:14.504 Discovery Log Number of Records 2, Generation counter 2 00:23:14.504 =====Discovery Log Entry 0====== 00:23:14.504 trtype: tcp 00:23:14.504 adrfam: ipv4 00:23:14.504 subtype: current discovery subsystem 00:23:14.504 treq: not specified, sq flow control disable supported 00:23:14.504 portid: 1 00:23:14.504 trsvcid: 4420 00:23:14.504 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:14.504 traddr: 10.0.0.1 00:23:14.504 eflags: none 00:23:14.504 sectype: none 00:23:14.504 =====Discovery Log Entry 1====== 00:23:14.504 trtype: tcp 00:23:14.504 adrfam: ipv4 00:23:14.504 subtype: nvme subsystem 00:23:14.504 treq: not specified, sq flow control disable supported 00:23:14.504 portid: 1 00:23:14.504 trsvcid: 4420 00:23:14.504 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:14.504 traddr: 10.0.0.1 00:23:14.504 eflags: none 00:23:14.504 sectype: none 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
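The configure_kernel_target and nvmet_auth_init steps above write into the kernel nvmet configfs tree before nvme discover confirms that nqn.2024-02.io.spdk:cnode0 is exported on 10.0.0.1:4420. Because xtrace shows only the echoed values and not the files they are redirected into, the sketch below maps those values onto the stock kernel nvmet attribute names; the attribute file names are an assumption, while the NQNs, block device, address and port are taken from the log.

# Hedged reconstruction of the configfs writes behind the bare "echo" lines above.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1

mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"               # assumed target file
echo 1                               > "$subsys/attr_allow_any_host"      # assumed target file
echo /dev/nvme0n1                    > "$subsys/namespaces/1/device_path"
echo 1                               > "$subsys/namespaces/1/enable"
echo 10.0.0.1                        > "$port/addr_traddr"
echo tcp                             > "$port/addr_trtype"
echo 4420                            > "$port/addr_trsvcid"
echo ipv4                            > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

# host/auth.sh then allow-lists the initiator NQN instead of "any host":
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"                                    # assumed target file
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"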
-- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.504 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.763 nvme0n1 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:14.763 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.764 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.022 nvme0n1 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.022 14:24:44 
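From here the script cycles through every digest, DH group and key index; each pass repeats the host-side sequence seen above: set the target's dhchap attributes, point bdev_nvme_set_options at one digest/dhgroup pair, attach with a --dhchap-key/--dhchap-ctrlr-key pair, verify bdev_nvme_get_controllers reports nvme0, and detach. The host-side calls of one pass are written out below as direct rpc.py invocations; the RPC names, flags, key files and addresses are the ones in this log, while the rpc.py path and the assumption that rpc_cmd wraps the default /var/tmp/spdk.sock socket are editorial.

# One loop iteration (digest=sha256, dhgroup=ffdhe2048, keyid=1), host side only.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py    # assumed rpc.py location
sock=/var/tmp/spdk.sock

# registered once, up front, in the trace:
$rpc -s $sock keyring_file_add_key key1  /tmp/spdk.key-null.EyN
$rpc -s $sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pOj

# repeated per digest/dhgroup/keyid combination:
$rpc -s $sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
$rpc -s $sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc -s $sock bdev_nvme_get_controllers                                 # expect "nvme0"
$rpc -s $sock bdev_nvme_detach_controller nvme0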
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.022 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.280 nvme0n1 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.280 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.281 nvme0n1 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.281 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.540 14:24:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.540 nvme0n1 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.540 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.541 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.801 nvme0n1 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.801 14:24:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.801 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.802 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.061 nvme0n1 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.061 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.062 
14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.062 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.322 nvme0n1 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.322 14:24:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.322 14:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.583 nvme0n1 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.583 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.584 14:24:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.584 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.845 nvme0n1 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.845 14:24:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.845 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.104 nvme0n1 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.104 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.363 nvme0n1 00:23:17.363 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.363 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.363 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.363 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.363 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.363 14:24:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:17.623 14:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.623 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 nvme0n1 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.884 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.143 nvme0n1 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:18.143 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.144 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.403 nvme0n1 00:23:18.403 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.403 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.403 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.403 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.403 14:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.403 14:24:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.403 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.662 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.920 nvme0n1 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.920 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.921 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.487 nvme0n1 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 
00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.487 14:24:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.057 nvme0n1 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.057 14:24:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.057 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.624 nvme0n1 00:23:20.624 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.624 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.624 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.624 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.624 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.624 14:24:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.624 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.625 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.195 nvme0n1 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.195 14:24:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.454 nvme0n1 00:23:21.454 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.454 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.454 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.454 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.454 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.454 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.712 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.648 nvme0n1 00:23:22.648 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.648 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.648 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.648 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.648 14:24:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.649 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.586 nvme0n1 00:23:23.586 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.586 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.586 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:23.587 
14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.587 14:24:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.524 nvme0n1 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:24.524 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.525 
14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.525 14:24:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.459 nvme0n1 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.459 14:24:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.397 nvme0n1 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.397 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.398 nvme0n1 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.398 14:24:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.658 nvme0n1 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:26.658 14:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.658 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.659 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.659 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.659 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.659 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.659 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.659 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.659 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.919 nvme0n1 00:23:26.919 14:24:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.919 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.920 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.920 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.920 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.178 nvme0n1 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.178 nvme0n1 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.178 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.436 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.437 14:24:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.437 nvme0n1 00:23:27.437 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.437 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.437 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.437 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.437 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.695 
14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.695 14:24:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.695 nvme0n1 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.695 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.957 nvme0n1 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.957 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.251 nvme0n1 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.251 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:28.509 
14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.509 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.510 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.510 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.510 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.510 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.510 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.510 14:24:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.510 nvme0n1 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.510 
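Around this point the last ffdhe3072 key is verified and the trace moves on to ffdhe4096. The outer structure driving these repetitions is visible in the host/auth.sh@101-@104 markers: one pass of nvmet_auth_set_key plus connect_authenticate per DH group and per key index, with the digest fixed to sha384 in this part of the log. A reconstructed sketch of that driver loop, using only what the trace shows:

# Driver loop as implied by the host/auth.sh@101-@104 trace markers (reconstruction).
for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do         # 0 1 2 3 4
        # Target side: install the key (and controller key, if any) for this keyid (host/auth.sh@103).
        nvmet_auth_set_key "sha384" "$dhgroup" "$keyid"
        # Host side: reconnect with matching parameters and verify (host/auth.sh@104).
        connect_authenticate "sha384" "$dhgroup" "$keyid"
    done
done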
14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.510 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.768 nvme0n1 00:23:28.768 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.768 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.768 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.768 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.768 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.028 14:24:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.028 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.288 nvme0n1 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.288 14:24:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 nvme0n1 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.548 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.549 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.808 nvme0n1 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.808 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.065 14:24:59 
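Every attach in this log is preceded by the same nvmf/common.sh@741-@755 lines resolving the address to connect to. Reconstructed from those lines, the helper is a lookup of the right environment variable per transport followed by an indirect expansion. The trace only shows expanded values, so the transport variable name (TEST_TRANSPORT below) and the early returns are assumptions:

# get_main_ns_ip, reconstructed from the nvmf/common.sh@741-@755 trace lines.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        ["rdma"]=NVMF_FIRST_TARGET_IP
        ["tcp"]=NVMF_INITIATOR_IP
    )
    # For this run the transport is tcp, so the candidate is NVMF_INITIATOR_IP.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}   # name of the env var to read
    [[ -z ${!ip} ]] && return 1            # indirect expansion; NVMF_INITIATOR_IP=10.0.0.1 in this log
    echo "${!ip}"                          # -> 10.0.0.1
}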
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.065 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.066 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.323 nvme0n1 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
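The nvmet_auth_set_key calls traced above only show the echo commands, not where their output is redirected. A minimal sketch of what one such call presumably amounts to on the target side, assuming the conventional Linux nvmet configfs layout (the configfs path and attribute names are an assumption and are not visible in this trace); $digest, $dhgroup, $key and $ckey stand for the values assigned at host/auth.sh@44-46 above:

    # Program one DH-HMAC-CHAP key slot for the allowed host on the kernel target.
    # Path and attribute names assumed from the standard nvmet configfs layout.
    host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${host_cfs}/dhchap_hash"      # e.g. hmac(sha384)
    echo "${dhgroup}"      > "${host_cfs}/dhchap_dhgroup"   # e.g. ffdhe4096
    echo "${key}"          > "${host_cfs}/dhchap_key"       # DHHC-1 secret for this key slot
    # The controller (bidirectional) key is only written when the slot defines one.
    [[ -n "${ckey}" ]] && echo "${ckey}" > "${host_cfs}/dhchap_ctrl_key"
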
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.323 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.324 14:24:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.893 nvme0n1 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.893 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.894 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 nvme0n1 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.462 14:25:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.462 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.463 14:25:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.463 14:25:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.031 nvme0n1 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:32.031 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.032 14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.032 
14:25:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.598 nvme0n1 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:32.598 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.599 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.168 nvme0n1 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.168 14:25:02 
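Each connect_authenticate pass visible in this trace reduces to four RPCs issued against the SPDK host application through rpc_cmd, the suite's JSON-RPC wrapper. Condensed from the ffdhe8192 iterations that follow; key0/ckey0 are key names registered earlier in the test and not shown in this excerpt:

    # Restrict the host to one digest/DH-group combination, then connect with DH-HMAC-CHAP.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Authentication succeeded if the controller shows up; tear it down for the next pass.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
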
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.168 14:25:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.102 nvme0n1 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.102 14:25:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.041 nvme0n1 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.041 
14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.041 14:25:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.980 nvme0n1 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.980 14:25:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.549 nvme0n1 00:23:36.549 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.808 14:25:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.808 14:25:06 
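Everything above, and the sha512 iterations starting just below, is the tail of a single nested sweep in host/auth.sh: the @100-@104 lines in the trace show the driver trying every digest with every DH group and every key slot. Its reconstructed shape, with the concrete values inferred from this excerpt only (the full digests/dhgroups/keys arrays are defined earlier in the script and may contain more entries):

    for digest in "${digests[@]}"; do          # sha384 here, sha512 starting below
        for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 ... ffdhe8192
            for keyid in "${!keys[@]}"; do     # key slots 0-4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target (see sketch above)
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach on the SPDK host
            done
        done
    done
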
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.808 14:25:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.745 nvme0n1 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:37.745 nvme0n1 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.745 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.746 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.005 nvme0n1 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:38.005 
14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.005 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.265 nvme0n1 00:23:38.265 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.266 
14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.266 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.527 nvme0n1 00:23:38.527 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.527 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.527 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.527 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.527 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.527 14:25:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.527 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.528 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.788 nvme0n1 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.788 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.789 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.789 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.789 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.048 nvme0n1 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.048 
14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.048 14:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.048 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.309 nvme0n1 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:39.309 14:25:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.309 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.570 nvme0n1 00:23:39.570 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.570 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.570 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.570 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.570 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.570 14:25:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.570 14:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.570 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.571 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.831 nvme0n1 00:23:39.831 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.831 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.831 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.831 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.831 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.831 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.831 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.832 
14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.832 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
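[Editor's note] The trace above repeats one pattern per digest/dhgroup/keyid combination (here sha512 with ffdhe2048/ffdhe3072, keyids 0-4). A minimal sketch of that pattern, distilled only from the rpc_cmd invocations visible in the log, is shown below; rpc_cmd is the SPDK autotest helper that wraps scripts/rpc.py, and key0/ckey0 are key names registered earlier in host/auth.sh, not literal secrets.

# One iteration of the auth loop (digest=sha512, dhgroup=ffdhe2048, keyid=0), as seen in the trace:

# 1. Restrict the host to the digest/dhgroup pair under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# 2. Attach to the target, authenticating with the host key (and the controller key when a ckey exists for this keyid).
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Verify the controller came up, then tear it down before the next combination.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0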
00:23:40.093 nvme0n1 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:40.093 14:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.093 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.354 nvme0n1 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.354 14:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.354 14:25:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.354 14:25:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.615 nvme0n1 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.615 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.874 nvme0n1 00:23:40.874 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.874 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.874 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.874 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.874 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.874 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.244 nvme0n1 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.244 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.503 14:25:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.763 nvme0n1 00:23:41.763 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.763 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.764 14:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.764 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.335 nvme0n1 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.335 14:25:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.335 14:25:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.904 nvme0n1 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.904 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.905 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.475 nvme0n1 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.475 14:25:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.045 nvme0n1 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.045 14:25:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.045 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.614 nvme0n1 00:23:44.614 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.614 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.614 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.614 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.614 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.614 14:25:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmVkNWM3ZGM5MWJmMjVmZTAwZmExMDUxZTlkODVjZDBfZcsA: 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: ]] 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzM4MzBjZWRjMDg4NjY1MzE0NmJhY2VjZWNiZTgzMWUzYThiMmIwMzI3MzFkMjE5YzAzYjdjMzMxYTVlNDI4Y0JZOW0=: 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.614 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.572 nvme0n1 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.573 14:25:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.522 nvme0n1 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.522 14:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YTgwNDk3M2U5ZWJmZmM3MjkxYWMyNzZjNGU1ZDdkZmYoNtvA: 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: ]] 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MzAyNDU0N2I4YzMzN2EzMjE1ZGFmOWM2ZjI4YTQ1M2XEmIuV: 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.522 14:25:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.522 14:25:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.089 nvme0n1 00:23:47.089 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.089 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.089 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.089 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.089 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MjcyNTgzMDk4ZTdjNDNlYmRhNzlkMDQxODQ0ZWViODEzYzE4ZWI4ZGFiZmJmYzc51uF0TQ==: 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: ]] 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTM5MmFlMjA2ODhiNmY0YmJhMDRlYzEzOTgwZjgzMGVjMxEe: 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.347 14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.347 
14:25:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.287 nvme0n1 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Zjk2MWJjM2ZiMzY2MmEyNmNlZjE1OWZmNTVhZGRmMzQ1NjBlNTNmNDAyNGI4NzA5MTkzNmU5NTQxMjY1MmMyNXNTO98=: 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.287 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.288 14:25:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.227 nvme0n1 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDFlNjU5ZGM1YzUxOTAwODUyNjgxN2E4MTQwMGI0MGE3MTRmYmNhZWFhZDg4MTM2xD0GzQ==: 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NWQxNWQ4ZGI0NGRmMWM3MjM1NjAyNjdlYmNmZjY5N2Q0ODllNDdkZTgyMDMzYmJjShZ+iQ==: 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.227 request: 00:23:49.227 { 00:23:49.227 "name": "nvme0", 00:23:49.227 "trtype": "tcp", 00:23:49.227 "traddr": "10.0.0.1", 00:23:49.227 "adrfam": "ipv4", 00:23:49.227 "trsvcid": "4420", 00:23:49.227 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:49.227 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:49.227 "prchk_reftag": false, 00:23:49.227 "prchk_guard": false, 00:23:49.227 "hdgst": false, 00:23:49.227 "ddgst": false, 00:23:49.227 "method": "bdev_nvme_attach_controller", 00:23:49.227 "req_id": 1 00:23:49.227 } 00:23:49.227 Got JSON-RPC error response 00:23:49.227 response: 00:23:49.227 { 00:23:49.227 "code": -5, 00:23:49.227 "message": "Input/output error" 00:23:49.227 } 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.227 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.228 14:25:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.228 request: 00:23:49.228 { 00:23:49.228 "name": "nvme0", 00:23:49.228 "trtype": "tcp", 00:23:49.228 "traddr": "10.0.0.1", 00:23:49.228 "adrfam": "ipv4", 00:23:49.228 "trsvcid": "4420", 00:23:49.228 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:49.228 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:49.228 "prchk_reftag": false, 00:23:49.228 "prchk_guard": false, 00:23:49.228 "hdgst": false, 00:23:49.228 "ddgst": false, 00:23:49.228 "dhchap_key": "key2", 00:23:49.228 "method": "bdev_nvme_attach_controller", 00:23:49.228 "req_id": 1 00:23:49.228 } 00:23:49.228 Got JSON-RPC error response 00:23:49.228 response: 00:23:49.228 { 00:23:49.228 "code": -5, 00:23:49.228 "message": "Input/output error" 00:23:49.228 } 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.228 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.487 request: 00:23:49.487 { 00:23:49.487 "name": "nvme0", 00:23:49.487 "trtype": "tcp", 00:23:49.487 "traddr": "10.0.0.1", 00:23:49.487 "adrfam": "ipv4", 00:23:49.487 "trsvcid": "4420", 00:23:49.487 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:49.487 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:49.487 "prchk_reftag": false, 00:23:49.487 "prchk_guard": false, 00:23:49.487 "hdgst": false, 00:23:49.487 "ddgst": false, 00:23:49.487 "dhchap_key": "key1", 00:23:49.487 "dhchap_ctrlr_key": "ckey2", 00:23:49.487 "method": "bdev_nvme_attach_controller", 00:23:49.487 "req_id": 1 00:23:49.487 } 00:23:49.487 Got JSON-RPC error response 00:23:49.487 response: 00:23:49.487 { 00:23:49.487 "code": -5, 00:23:49.487 "message": "Input/output error" 00:23:49.487 } 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.487 rmmod nvme_tcp 00:23:49.487 rmmod nvme_fabrics 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1001479 ']' 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1001479 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1001479 ']' 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1001479 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1001479 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1001479' 00:23:49.487 killing process with pid 1001479 00:23:49.487 14:25:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1001479 00:23:49.487 14:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1001479 00:23:49.746 14:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.746 14:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:49.746 14:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.746 14:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.746 14:25:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.746 14:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.746 14:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.746 14:25:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.651 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:51.651 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:51.651 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:51.651 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:51.651 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:51.651 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:51.911 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:51.911 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:51.911 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:51.911 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:51.911 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:51.911 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:51.911 14:25:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:52.848 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:52.848 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:52.848 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:52.848 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:53.108 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:53.108 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:53.108 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:53.108 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:53.108 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:53.108 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:53.108 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:53.108 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:53.108 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:53.108 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:53.108 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:53.108 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:54.049 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:54.049 14:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8VH /tmp/spdk.key-null.EyN /tmp/spdk.key-sha256.Jcv /tmp/spdk.key-sha384.Tpo /tmp/spdk.key-sha512.lGB /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:54.049 14:25:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:54.982 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:54.982 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:55.241 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:55.241 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:55.241 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:55.241 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:55.241 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:55.241 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:55.241 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:55.241 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:55.241 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:55.241 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:55.241 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:55.241 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:55.241 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:55.241 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:55.241 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:55.241 00:23:55.241 real 0m47.082s 00:23:55.241 user 0m44.818s 00:23:55.241 sys 0m5.715s 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.241 ************************************ 00:23:55.241 END TEST nvmf_auth_host 00:23:55.241 ************************************ 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.241 ************************************ 00:23:55.241 START TEST nvmf_digest 00:23:55.241 ************************************ 00:23:55.241 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:55.500 * Looking for test storage... 
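For reference, the DH-HMAC-CHAP flow that the auth_host trace above repeats for each key id can be driven by hand with the same RPCs that rpc_cmd wraps (scripts/rpc.py). This is a minimal sketch, not the test script itself: it assumes the target subsystem nqn.2024-02.io.spdk:cnode0 is already listening on 10.0.0.1:4420 and that the named keys (key1/ckey1) were loaded earlier in the run, as they were here.

# Offer the digest/DH-group combination under test (sha512 + ffdhe8192 in the trace above).
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
# Attach with a host key and a bidirectional controller key; omitting or mismatching the
# key is what produces the "Input/output error" (-5) responses seen in the negative cases.
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key key1 --dhchap-ctrlr-key ckey1
# Confirm the controller came up, then detach before the next key id is tried.
scripts/rpc.py bdev_nvme_get_controllers
scripts/rpc.py bdev_nvme_detach_controller nvme0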
00:23:55.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.500 
14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.500 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.501 14:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:57.407 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:57.407 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.407 
14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:57.407 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:57.407 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:57.407 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.408 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.408 14:25:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.408 14:25:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.408 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:57.408 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.408 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.408 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.408 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:57.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:23:57.408 00:23:57.408 --- 10.0.0.2 ping statistics --- 00:23:57.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.408 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:23:57.408 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:23:57.668 00:23:57.668 --- 10.0.0.1 ping statistics --- 00:23:57.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.668 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:57.668 ************************************ 00:23:57.668 START TEST nvmf_digest_clean 00:23:57.668 ************************************ 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1010662 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:57.668 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1010662 00:23:57.669 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1010662 ']' 00:23:57.669 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.669 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.669 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.669 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.669 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:57.669 [2024-07-25 14:25:27.161444] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:23:57.669 [2024-07-25 14:25:27.161533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.669 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.669 [2024-07-25 14:25:27.225986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.927 [2024-07-25 14:25:27.336611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.927 [2024-07-25 14:25:27.336692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.927 [2024-07-25 14:25:27.336722] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.927 [2024-07-25 14:25:27.336734] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.927 [2024-07-25 14:25:27.336744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:57.927 [2024-07-25 14:25:27.336772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:57.927 null0 00:23:57.927 [2024-07-25 14:25:27.503281] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.927 [2024-07-25 14:25:27.527546] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1010684 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1010684 /var/tmp/bperf.sock 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1010684 ']' 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:57.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.927 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:57.927 [2024-07-25 14:25:27.572278] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:23:57.927 [2024-07-25 14:25:27.572354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1010684 ] 00:23:58.185 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.185 [2024-07-25 14:25:27.630792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.185 [2024-07-25 14:25:27.738674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.185 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.185 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:23:58.185 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:58.185 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:58.185 14:25:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:58.751 14:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.751 14:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:59.320 nvme0n1 00:23:59.320 14:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:59.320 14:25:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:59.320 Running I/O for 2 seconds... 
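The bperf run just launched follows a fixed RPC sequence: bdevperf is started with --wait-for-rpc, framework_start_init finishes initialization, a TCP controller is attached with data digest (--ddgst) enabled so every payload is CRC32C-verified, and perform_tests starts the timed workload. A minimal sketch of that sequence, assuming relative paths inside an SPDK tree (the socket, target address, and NQN are taken from the log above):

#!/usr/bin/env bash
# Sketch of the digest bperf setup; paths are relative to the SPDK tree.
BPERF_SOCK=/var/tmp/bperf.sock

# Start bdevperf paused (-z keeps it alive, --wait-for-rpc defers init).
./build/examples/bdevperf -m 2 -r "$BPERF_SOCK" \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# The harness waits for the socket to appear before issuing RPCs.
./scripts/rpc.py -s "$BPERF_SOCK" framework_start_init

# Attach the NVMe/TCP target with data digest enabled (--ddgst).
./scripts/rpc.py -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the timed workload against the resulting nvme0n1 bdev.
./examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests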
00:24:01.227 00:24:01.227 Latency(us) 00:24:01.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.227 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:01.227 nvme0n1 : 2.00 19466.51 76.04 0.00 0.00 6567.30 3131.16 15437.37 00:24:01.227 =================================================================================================================== 00:24:01.227 Total : 19466.51 76.04 0.00 0.00 6567.30 3131.16 15437.37 00:24:01.227 0 00:24:01.227 14:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:01.227 14:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:01.227 14:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:01.227 14:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:01.227 | select(.opcode=="crc32c") 00:24:01.227 | "\(.module_name) \(.executed)"' 00:24:01.227 14:25:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1010684 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1010684 ']' 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1010684 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:01.485 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1010684 00:24:01.743 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:01.743 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:01.743 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1010684' 00:24:01.743 killing process with pid 1010684 00:24:01.743 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1010684 00:24:01.743 Received shutdown signal, test time was about 2.000000 seconds 00:24:01.743 00:24:01.743 Latency(us) 00:24:01.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.743 =================================================================================================================== 00:24:01.743 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.743 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1010684 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1011094 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1011094 /var/tmp/bperf.sock 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1011094 ']' 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:02.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:02.001 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:02.001 [2024-07-25 14:25:31.472442] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:02.001 [2024-07-25 14:25:31.472517] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011094 ] 00:24:02.001 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:02.001 Zero copy mechanism will not be used. 
00:24:02.001 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.001 [2024-07-25 14:25:31.528970] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.001 [2024-07-25 14:25:31.638562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.258 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.258 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:02.258 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:02.258 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:02.258 14:25:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:02.515 14:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.515 14:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.774 nvme0n1 00:24:03.033 14:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:03.033 14:25:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:03.033 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:03.033 Zero copy mechanism will not be used. 00:24:03.033 Running I/O for 2 seconds... 
00:24:04.938 00:24:04.938 Latency(us) 00:24:04.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.938 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:04.938 nvme0n1 : 2.00 5620.76 702.60 0.00 0.00 2842.17 600.75 10728.49 00:24:04.938 =================================================================================================================== 00:24:04.938 Total : 5620.76 702.60 0.00 0.00 2842.17 600.75 10728.49 00:24:04.938 0 00:24:04.938 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:04.938 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:04.938 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:04.938 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:04.938 | select(.opcode=="crc32c") 00:24:04.938 | "\(.module_name) \(.executed)"' 00:24:04.938 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1011094 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1011094 ']' 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1011094 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1011094 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1011094' 00:24:05.202 killing process with pid 1011094 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1011094 00:24:05.202 Received shutdown signal, test time was about 2.000000 seconds 00:24:05.202 00:24:05.202 Latency(us) 00:24:05.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.202 =================================================================================================================== 00:24:05.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.202 14:25:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1011094 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1011619 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1011619 /var/tmp/bperf.sock 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1011619 ']' 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:05.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.498 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:05.759 [2024-07-25 14:25:35.157113] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:24:05.759 [2024-07-25 14:25:35.157194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1011619 ] 00:24:05.759 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.759 [2024-07-25 14:25:35.216749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.759 [2024-07-25 14:25:35.321652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.759 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:05.759 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:05.759 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:05.759 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:05.759 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:06.324 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.324 14:25:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.581 nvme0n1 00:24:06.581 14:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:06.581 14:25:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:06.840 Running I/O for 2 seconds... 
00:24:08.743 00:24:08.743 Latency(us) 00:24:08.743 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.743 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:08.743 nvme0n1 : 2.01 20548.30 80.27 0.00 0.00 6214.55 2694.26 15631.55 00:24:08.743 =================================================================================================================== 00:24:08.743 Total : 20548.30 80.27 0.00 0.00 6214.55 2694.26 15631.55 00:24:08.743 0 00:24:08.743 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:08.743 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:08.743 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:08.743 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:08.743 | select(.opcode=="crc32c") 00:24:08.743 | "\(.module_name) \(.executed)"' 00:24:08.743 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1011619 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1011619 ']' 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1011619 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1011619 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1011619' 00:24:09.001 killing process with pid 1011619 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1011619 00:24:09.001 Received shutdown signal, test time was about 2.000000 seconds 00:24:09.001 00:24:09.001 Latency(us) 00:24:09.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.001 =================================================================================================================== 00:24:09.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:09.001 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1011619 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1012030 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1012030 /var/tmp/bperf.sock 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1012030 ']' 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:09.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.260 14:25:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:09.260 [2024-07-25 14:25:38.878450] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:09.260 [2024-07-25 14:25:38.878540] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012030 ] 00:24:09.260 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:09.260 Zero copy mechanism will not be used. 
00:24:09.260 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.518 [2024-07-25 14:25:38.936210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.518 [2024-07-25 14:25:39.040530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.518 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.518 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:09.518 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:09.518 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:09.518 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:10.084 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.084 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:10.344 nvme0n1 00:24:10.344 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:10.344 14:25:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:10.344 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:10.344 Zero copy mechanism will not be used. 00:24:10.344 Running I/O for 2 seconds... 
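After each two-second run the harness repeats the same accounting check over the bperf socket: it pulls the accel framework stats, extracts the crc32c entry, and verifies that at least one digest executed and that the expected module (software, since DSA scanning is disabled) did the work. A standalone sketch of that check, assuming a relative rpc.py path (socket and jq filter as logged):

# Read "module executed" for the crc32c opcode from the bperf app.
read -r acc_module acc_executed < <(
    ./scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
# With scan_dsa=false the digests must come from the software module,
# and at least one crc32c operation must have run.
(( acc_executed > 0 )) && [[ $acc_module == software ]] \
    && echo "crc32c via $acc_module: $acc_executed ops"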
00:24:12.249 00:24:12.249 Latency(us) 00:24:12.249 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.249 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:12.249 nvme0n1 : 2.00 6638.71 829.84 0.00 0.00 2398.23 1723.35 12039.21 00:24:12.249 =================================================================================================================== 00:24:12.249 Total : 6638.71 829.84 0.00 0.00 2398.23 1723.35 12039.21 00:24:12.249 0 00:24:12.249 14:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:12.249 14:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:12.249 14:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:12.249 14:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:12.249 | select(.opcode=="crc32c") 00:24:12.249 | "\(.module_name) \(.executed)"' 00:24:12.249 14:25:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:12.508 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:12.508 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:12.508 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:12.508 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:12.508 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1012030 00:24:12.508 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1012030 ']' 00:24:12.508 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1012030 00:24:12.508 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:12.768 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:12.768 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1012030 00:24:12.768 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:12.768 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:12.768 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1012030' 00:24:12.768 killing process with pid 1012030 00:24:12.768 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1012030 00:24:12.768 Received shutdown signal, test time was about 2.000000 seconds 00:24:12.768 00:24:12.768 Latency(us) 00:24:12.768 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.768 =================================================================================================================== 00:24:12.768 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.768 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 1012030 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1010662 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1010662 ']' 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1010662 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1010662 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1010662' 00:24:13.029 killing process with pid 1010662 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1010662 00:24:13.029 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1010662 00:24:13.288 00:24:13.288 real 0m15.638s 00:24:13.288 user 0m29.958s 00:24:13.288 sys 0m4.575s 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:13.288 ************************************ 00:24:13.288 END TEST nvmf_digest_clean 00:24:13.288 ************************************ 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:13.288 ************************************ 00:24:13.288 START TEST nvmf_digest_error 00:24:13.288 ************************************ 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1012472 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1012472 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1012472 ']' 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.288 14:25:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.288 [2024-07-25 14:25:42.852705] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:13.288 [2024-07-25 14:25:42.852776] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.288 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.288 [2024-07-25 14:25:42.913704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.547 [2024-07-25 14:25:43.024245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.547 [2024-07-25 14:25:43.024302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.547 [2024-07-25 14:25:43.024334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.547 [2024-07-25 14:25:43.024346] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.547 [2024-07-25 14:25:43.024363] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:13.547 [2024-07-25 14:25:43.024402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.547 [2024-07-25 14:25:43.096959] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.547 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.808 null0 00:24:13.808 [2024-07-25 14:25:43.209685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.808 [2024-07-25 14:25:43.233920] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1012616 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1012616 /var/tmp/bperf.sock 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1012616 ']' 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:13.808 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:13.809 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.809 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.809 [2024-07-25 14:25:43.285583] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:13.809 [2024-07-25 14:25:43.285658] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1012616 ] 00:24:13.809 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.809 [2024-07-25 14:25:43.343179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.809 [2024-07-25 14:25:43.450077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.069 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.069 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:14.069 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:14.069 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:14.398 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:14.398 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.398 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.398 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.398 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.398 14:25:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.967 nvme0n1 00:24:14.967 14:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:14.967 14:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.967 14:25:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.967 14:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.967 14:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:14.967 14:25:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.967 Running I/O for 2 seconds... 00:24:14.967 [2024-07-25 14:25:44.471758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.967 [2024-07-25 14:25:44.471822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.967 [2024-07-25 14:25:44.471842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.967 [2024-07-25 14:25:44.486811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.967 [2024-07-25 14:25:44.486861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.967 [2024-07-25 14:25:44.486879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.967 [2024-07-25 14:25:44.502272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.967 [2024-07-25 14:25:44.502309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.967 [2024-07-25 14:25:44.502342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.967 [2024-07-25 14:25:44.513958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.967 [2024-07-25 14:25:44.513989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.967 [2024-07-25 14:25:44.514021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.967 [2024-07-25 14:25:44.526906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.967 [2024-07-25 14:25:44.526959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.967 [2024-07-25 14:25:44.526978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.967 [2024-07-25 14:25:44.538597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.967 [2024-07-25 14:25:44.538642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.967 [2024-07-25 14:25:44.538661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.967 [2024-07-25 14:25:44.551336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.968 [2024-07-25 14:25:44.551366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.968 [2024-07-25 14:25:44.551399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.968 [2024-07-25 14:25:44.563340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.968 [2024-07-25 14:25:44.563384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.968 [2024-07-25 14:25:44.563402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.968 [2024-07-25 14:25:44.575464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.968 [2024-07-25 14:25:44.575494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.968 [2024-07-25 14:25:44.575525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.968 [2024-07-25 14:25:44.587215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.968 [2024-07-25 14:25:44.587257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.968 [2024-07-25 14:25:44.587276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.968 [2024-07-25 14:25:44.600591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.968 [2024-07-25 14:25:44.600621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.968 [2024-07-25 14:25:44.600652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.968 [2024-07-25 14:25:44.612390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:14.968 [2024-07-25 14:25:44.612422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.968 [2024-07-25 14:25:44.612454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.628007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.228 [2024-07-25 14:25:44.628054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.228 
[2024-07-25 14:25:44.628088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.642642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.228 [2024-07-25 14:25:44.642673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.228 [2024-07-25 14:25:44.642705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.654118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.228 [2024-07-25 14:25:44.654148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.228 [2024-07-25 14:25:44.654181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.669936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.228 [2024-07-25 14:25:44.669982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.228 [2024-07-25 14:25:44.670009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.680554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.228 [2024-07-25 14:25:44.680585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.228 [2024-07-25 14:25:44.680617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.696364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.228 [2024-07-25 14:25:44.696393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.228 [2024-07-25 14:25:44.696425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.712548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.228 [2024-07-25 14:25:44.712578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.228 [2024-07-25 14:25:44.712610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.728768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.228 [2024-07-25 14:25:44.728799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:6562 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.228 [2024-07-25 14:25:44.728830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.228 [2024-07-25 14:25:44.740246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.740277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.740310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.754501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.754531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.754564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.768887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.768918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.768950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.780571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.780601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.780634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.795546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.795583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.795617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.810180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.810211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.810243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.825717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.825747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:111 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.825779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.837254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.837286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.837319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.852757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.852787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.852819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.229 [2024-07-25 14:25:44.867758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.229 [2024-07-25 14:25:44.867787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:11755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.229 [2024-07-25 14:25:44.867819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.489 [2024-07-25 14:25:44.883794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.489 [2024-07-25 14:25:44.883826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.489 [2024-07-25 14:25:44.883858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.489 [2024-07-25 14:25:44.895435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.489 [2024-07-25 14:25:44.895480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.489 [2024-07-25 14:25:44.895498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.489 [2024-07-25 14:25:44.908842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.489 [2024-07-25 14:25:44.908873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:44.908905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:44.922511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:44.922556] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:3941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:44.922573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:44.935534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:44.935579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:44.935597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:44.946567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:44.946596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:44.946627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:44.959128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:44.959160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:44.959193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:44.972831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:44.972862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:44.972895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:44.985889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:44.985920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:25409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:44.985952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:44.996284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:44.996315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:44.996348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.011337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 
[2024-07-25 14:25:45.011369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.011401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.022252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.022294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.022337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.034448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.034478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.034510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.046566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.046612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.046629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.059455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.059499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.059518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.072938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.072968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.073001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.084710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.084740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.084772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.096952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.096996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.097013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.111075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.111106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.111123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.126185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.126215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.126249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.490 [2024-07-25 14:25:45.138980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.490 [2024-07-25 14:25:45.139041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.490 [2024-07-25 14:25:45.139069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.751 [2024-07-25 14:25:45.152597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.751 [2024-07-25 14:25:45.152643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.751 [2024-07-25 14:25:45.152660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.751 [2024-07-25 14:25:45.163124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.751 [2024-07-25 14:25:45.163170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.751 [2024-07-25 14:25:45.163188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.751 [2024-07-25 14:25:45.175215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.751 [2024-07-25 14:25:45.175245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.751 [2024-07-25 14:25:45.175282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.751 [2024-07-25 14:25:45.189513] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.751 [2024-07-25 14:25:45.189558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.751 [2024-07-25 14:25:45.189576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.751 [2024-07-25 14:25:45.201258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.751 [2024-07-25 14:25:45.201289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.751 [2024-07-25 14:25:45.201328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.751 [2024-07-25 14:25:45.216213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.751 [2024-07-25 14:25:45.216258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.216275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.228331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.228365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:22044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.228383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.243184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.243215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.243248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.255442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.255473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.255505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.266861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.266891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.266924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.279705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.279739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.279772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.292210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.292242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:19874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.292276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.302627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.302657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.302689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.317586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.317624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.317660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.331100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.331164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.331184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.342273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.342304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.342341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.355402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.355456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.355475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.367433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.367462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.367495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.380487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.380516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.380549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.752 [2024-07-25 14:25:45.392171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:15.752 [2024-07-25 14:25:45.392214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.752 [2024-07-25 14:25:45.392231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.404601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.404632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:24508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.404650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.416954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.416990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.417023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.430685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.430730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.430747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.442709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.442740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.442772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.457966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.457997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.458029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.468049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.468087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.468121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.483172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.483202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.483235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.495935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.495981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.495999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.506712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.506745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.506779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.520216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.013 [2024-07-25 14:25:45.520246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.013 [2024-07-25 14:25:45.520279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.013 [2024-07-25 14:25:45.531761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.531792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:9352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 
[2024-07-25 14:25:45.531824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.545725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.545756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:7646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.545795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.557376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.557422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.557439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.571843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.571873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.571913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.588371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.588401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.588418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.602306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.602337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.602373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.613994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.614023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.614054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.626761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.626791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22646 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.626823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.638012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.638054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.638081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.014 [2024-07-25 14:25:45.653499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.014 [2024-07-25 14:25:45.653542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.014 [2024-07-25 14:25:45.653560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.666937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.666967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.666998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.678627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.678656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.678688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.692232] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.692268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.692301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.703687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.703716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.703746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.715845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.715874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:15257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.715905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.728010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.728051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.728094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.740805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.740834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.740864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.752982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.753011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.753042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.764881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.764909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.764940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.777260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.777289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.777321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.789482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.789510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.789542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.802026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.802080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.802100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.813773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.275 [2024-07-25 14:25:45.813802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.275 [2024-07-25 14:25:45.813833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.275 [2024-07-25 14:25:45.829566] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.276 [2024-07-25 14:25:45.829596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.276 [2024-07-25 14:25:45.829613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.276 [2024-07-25 14:25:45.843629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.276 [2024-07-25 14:25:45.843694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.276 [2024-07-25 14:25:45.843713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.276 [2024-07-25 14:25:45.855135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.276 [2024-07-25 14:25:45.855165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:25124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.276 [2024-07-25 14:25:45.855196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.276 [2024-07-25 14:25:45.868188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.276 [2024-07-25 14:25:45.868236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.276 [2024-07-25 14:25:45.868289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.276 [2024-07-25 14:25:45.879835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.276 [2024-07-25 14:25:45.879864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.276 [2024-07-25 14:25:45.879881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.276 [2024-07-25 14:25:45.891922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.276 
[2024-07-25 14:25:45.891951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.276 [2024-07-25 14:25:45.891982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.276 [2024-07-25 14:25:45.905128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.276 [2024-07-25 14:25:45.905158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.276 [2024-07-25 14:25:45.905197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.276 [2024-07-25 14:25:45.916533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.276 [2024-07-25 14:25:45.916563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.276 [2024-07-25 14:25:45.916594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.536 [2024-07-25 14:25:45.929922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.536 [2024-07-25 14:25:45.929967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.536 [2024-07-25 14:25:45.929984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.536 [2024-07-25 14:25:45.942879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.536 [2024-07-25 14:25:45.942912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.536 [2024-07-25 14:25:45.942945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.536 [2024-07-25 14:25:45.955251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.536 [2024-07-25 14:25:45.955295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.536 [2024-07-25 14:25:45.955313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.536 [2024-07-25 14:25:45.966969] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.536 [2024-07-25 14:25:45.966999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.536 [2024-07-25 14:25:45.967030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.536 [2024-07-25 14:25:45.980115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x765cb0) 00:24:16.536 [2024-07-25 14:25:45.980148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.536 [2024-07-25 14:25:45.980180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.536 [2024-07-25 14:25:45.991806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.536 [2024-07-25 14:25:45.991834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:45.991865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.007501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.007530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.007547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.022007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.022070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.022090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.033614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.033643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.033674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.048210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.048240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.048273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.059664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.059693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.059725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.074071] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.074102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:1359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.074135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.086106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.086135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.086167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.097899] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.097928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.097959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.111381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.111425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.111442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.123155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.123185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.123230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.136225] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.136256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.136288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.148282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.148326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.148344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:16.537 [2024-07-25 14:25:46.162901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.162929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.162967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.537 [2024-07-25 14:25:46.175101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.537 [2024-07-25 14:25:46.175145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.537 [2024-07-25 14:25:46.175161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.189397] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.189443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.189460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.201996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.202034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.202076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.212997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.213056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.213089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.227362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.227392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.227424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.238923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.238958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.238990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.253416] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.253459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.253475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.269368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.269398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.269414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.284736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.284767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.284800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.295563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.295591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.295621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.796 [2024-07-25 14:25:46.311704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.796 [2024-07-25 14:25:46.311733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.796 [2024-07-25 14:25:46.311766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.328087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.328117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.328154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.338928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.338959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.338993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.354162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.354208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.354224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.369656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.369714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.369732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.386243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.386276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.386311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.398263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.398310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.398343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.414679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.414714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.414745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.429434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.429465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.429496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.797 [2024-07-25 14:25:46.442231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0) 00:24:16.797 [2024-07-25 14:25:46.442277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.797 [2024-07-25 14:25:46.442306] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:17.055 [2024-07-25 14:25:46.454424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x765cb0)
00:24:17.055 [2024-07-25 14:25:46.454487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:17.055 [2024-07-25 14:25:46.454508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:17.055
00:24:17.055 Latency(us)
00:24:17.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:17.055 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:17.055 nvme0n1 : 2.00 19433.28 75.91 0.00 0.00 6579.91 3592.34 22136.60
00:24:17.055 ===================================================================================================================
00:24:17.055 Total : 19433.28 75.91 0.00 0.00 6579.91 3592.34 22136.60
00:24:17.055 0
00:24:17.055 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:17.055 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:17.055 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:17.055 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:17.055 | .driver_specific
00:24:17.055 | .nvme_error
00:24:17.055 | .status_code
00:24:17.055 | .command_transient_transport_error'
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 152 > 0 ))
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1012616
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1012616 ']'
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1012616
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1012616
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1012616'
00:24:17.313 killing process with pid 1012616
00:24:17.313 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1012616
00:24:17.313 Received shutdown signal, test time was about 2.000000 seconds
00:24:17.313
00:24:17.313 Latency(us)
00:24:17.313 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:17.313 ===================================================================================================================
00:24:17.314 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:17.314 14:25:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1012616
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1013028
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1013028 /var/tmp/bperf.sock
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1013028 ']'
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:17.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:17.572 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:17.572 [2024-07-25 14:25:47.052283] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization...
00:24:17.572 [2024-07-25 14:25:47.052359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013028 ]
00:24:17.572 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:17.572 Zero copy mechanism will not be used.
00:24:17.572 EAL: No free 2048 kB hugepages reported on node 1
00:24:17.572 [2024-07-25 14:25:47.111140] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:17.572 [2024-07-25 14:25:47.218991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:17.830 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:17.830 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:17.830 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:17.830 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:18.088 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:18.088 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.088 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:18.088 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.088 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:18.088 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:18.345 nvme0n1
00:24:18.345 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:24:18.345 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:18.345 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:18.345 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:18.345 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:18.345 14:25:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:24:18.345 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:18.345 Zero copy mechanism will not be used.
00:24:18.345 Running I/O for 2 seconds...
00:24:18.607 [2024-07-25 14:25:48.002364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.607 [2024-07-25 14:25:48.002453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.607 [2024-07-25 14:25:48.002487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.607 [2024-07-25 14:25:48.007318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.607 [2024-07-25 14:25:48.007352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.607 [2024-07-25 14:25:48.007388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.607 [2024-07-25 14:25:48.012346] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.607 [2024-07-25 14:25:48.012378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.607 [2024-07-25 14:25:48.012396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.607 [2024-07-25 14:25:48.017535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.017565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.017583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.022503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.022533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.022566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.028270] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.028316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.028332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.034548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.034579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.034611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.039801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.039846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.039864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.044931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.044975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.044991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.050185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.050215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.050233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.053530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.053564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.053597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.057595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.057640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.057657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.062523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.062552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.062568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.067597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.067625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.067642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.072747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.072792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.072808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.076980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.077010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.077027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.080552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.080581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.080612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.087904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.087949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.087965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.095639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.095684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.095701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.103339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.103385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.103401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.111088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.111119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.111136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.118831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.118874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.118890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.126462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.126511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.126527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.134175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.134220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.608 [2024-07-25 14:25:48.134236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.608 [2024-07-25 14:25:48.141844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.608 [2024-07-25 14:25:48.141887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.141904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.149596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.149626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.149642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.157457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.157487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.157523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.165024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.165073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 
[2024-07-25 14:25:48.165109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.172833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.172877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.172893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.180486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.180519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.180535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.188194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.188243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.188259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.195874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.195903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.195919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.202486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.202529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.202544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.207920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.207950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.207967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.212781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.212811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.212842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.218202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.218232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.218263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.223900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.223929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.223944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.230008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.230056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.230082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.235365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.235397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.235415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.241333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.241379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.241396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.246699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.246731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.246748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.250078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.250108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.250125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.609 [2024-07-25 14:25:48.255363] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.609 [2024-07-25 14:25:48.255394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.609 [2024-07-25 14:25:48.255426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.869 [2024-07-25 14:25:48.261089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.869 [2024-07-25 14:25:48.261123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.869 [2024-07-25 14:25:48.261140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.869 [2024-07-25 14:25:48.266729] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.869 [2024-07-25 14:25:48.266759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.869 [2024-07-25 14:25:48.266797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.869 [2024-07-25 14:25:48.272847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.869 [2024-07-25 14:25:48.272882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.869 [2024-07-25 14:25:48.272916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.869 [2024-07-25 14:25:48.278976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.869 [2024-07-25 14:25:48.279007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.869 [2024-07-25 14:25:48.279025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.869 [2024-07-25 14:25:48.284614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.869 [2024-07-25 14:25:48.284644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.284677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.290606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.290651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.290668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.296394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.296424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.296460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.301952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.301998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.302015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.307922] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.307971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.307988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.314106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.314152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.314169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.319826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.319865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.319898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.325948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.325994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.326011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.332266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 
[2024-07-25 14:25:48.332298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.332315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.339694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.339724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.339755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.345176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.345207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.345224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.350481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.350512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.350529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.355778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.355809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.355827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.361353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.361384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.361401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.364720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.364750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.364766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.370087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.370117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.370134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.375271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.375302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.375319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.380940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.380970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.380986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.386919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.386949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.386982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.392970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.393000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.393033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.398425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.398455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.398488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.403842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.403871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.403903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.409378] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.409422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.409439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.414721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.414766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.870 [2024-07-25 14:25:48.414789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.870 [2024-07-25 14:25:48.419621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.870 [2024-07-25 14:25:48.419650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.419681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.425207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.425237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.425269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.430779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.430807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.430839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.435572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.435601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.435617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.440401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.440429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.440460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:18.871 [2024-07-25 14:25:48.445096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.445127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.445144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.450839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.450869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.450902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.457802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.457831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.457862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.464688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.464735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.464752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.470932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.470961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.470979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.476552] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.476582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.476614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.482610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.482638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.482654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.488694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.488737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.488753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.494929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.494957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.494973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.500740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.500783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.500800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.505879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.505907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.505922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.511319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.511351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.511368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.871 [2024-07-25 14:25:48.517137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:18.871 [2024-07-25 14:25:48.517168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.871 [2024-07-25 14:25:48.517186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.524514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.524544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.524561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.531093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.531125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.531142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.537759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.537802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.537818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.543894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.543938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.543954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.549818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.549849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.549865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.556047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.556101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.556119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.561649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.561679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.561696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.568050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.568089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.568116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.130 [2024-07-25 14:25:48.574739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.130 [2024-07-25 14:25:48.574770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.130 [2024-07-25 14:25:48.574788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.579295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.579330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.579347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.584721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.584764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.584779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.591573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.591602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.591617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.597714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.597742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.597773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.603929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.603958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.603990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.609995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.610024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 
[2024-07-25 14:25:48.610055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.616204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.616234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.616266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.622255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.622291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.622309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.628456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.628486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.628519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.634076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.634121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.634138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.638968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.638998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.639029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.643943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.643972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.644003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.649090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.649120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.649136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.654521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.654550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.654581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.660565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.660596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.660613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.666653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.666685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.666702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.672774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.672803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.672834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.678762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.678790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.678820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.684959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.684988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.685003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.691115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.691146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.691177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.697282] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.697327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.697343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.703525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.703554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.703584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.709697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.709727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.709758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.716385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.131 [2024-07-25 14:25:48.716415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.131 [2024-07-25 14:25:48.716431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.131 [2024-07-25 14:25:48.722564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.722608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.722629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.727621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.727651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.727684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.732725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.732755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.732772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.738042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.738094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.738111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.743659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.743701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.743717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.748650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.748678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.748707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.754245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.754275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.754307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.759732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.759762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.759795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.765515] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.765544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.765575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.771219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 
[2024-07-25 14:25:48.771250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.771267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.776657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.776688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.776706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.132 [2024-07-25 14:25:48.781866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.132 [2024-07-25 14:25:48.781896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.132 [2024-07-25 14:25:48.781928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.787405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.787436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.787453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.792491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.792522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.792539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.797630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.797675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.797692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.801136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.801165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.801197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.806424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.806451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.806482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.811528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.811557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.811594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.816405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.816433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.816465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.821362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.821406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.821422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.826171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.826200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.826217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.831020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.391 [2024-07-25 14:25:48.831049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.391 [2024-07-25 14:25:48.831075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.391 [2024-07-25 14:25:48.836038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.836079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.836099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.840875] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.840905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.840937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.845656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.845683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.845714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.850595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.850622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.850653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.855564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.855597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.855629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.860303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.860349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.860365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.864901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.864931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.864962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.869980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.870009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.870042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:19.392 [2024-07-25 14:25:48.874950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.874979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.875011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.879771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.879800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.879832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.884726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.884753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.884782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.889951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.889977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.890008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.895116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.895145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.895178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.900229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.900273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.900290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.905608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.905654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.905671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.910988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.911018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.911049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.915831] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.915861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.915886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.920874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.920923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.920941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.926592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.926622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.926639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.930459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.930487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.930518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.938272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.938316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.938333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.944790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.944834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.944859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.951656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.951686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.951718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.957564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.957594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.392 [2024-07-25 14:25:48.957625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.392 [2024-07-25 14:25:48.963263] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.392 [2024-07-25 14:25:48.963294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:48.963312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:48.969990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:48.970021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:48.970052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:48.975313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:48.975369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:48.975400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:48.980265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:48.980295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:48.980312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:48.985293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:48.985323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:48.985340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:48.990222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:48.990251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:48.990267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:48.995414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:48.995449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:48.995481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:49.001903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:49.001933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:49.001965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:49.009139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:49.009171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:49.009188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:49.016664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:49.016694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:49.016726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:49.024460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:49.024505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:49.024522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:49.032445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:49.032492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 
[2024-07-25 14:25:49.032509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.393 [2024-07-25 14:25:49.040284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.393 [2024-07-25 14:25:49.040316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.393 [2024-07-25 14:25:49.040333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.048076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.048117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.048148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.055929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.055961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.055978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.063800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.063845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.063861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.071630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.071661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.071694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.079238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.079284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.079301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.086993] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.087025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8128 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.087042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.095193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.095224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.095241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.103406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.103437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.103475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.111479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.111509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.111541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.119349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.119398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.119416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.125961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.125992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.126018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.131884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.131915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.131932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.135288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.135317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 
nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.135348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.140464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.140494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.140510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.654 [2024-07-25 14:25:49.145901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.654 [2024-07-25 14:25:49.145932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.654 [2024-07-25 14:25:49.145965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.151178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.151209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.151225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.156699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.156728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.156759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.162705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.162734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.162766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.168634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.168662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.168694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.174863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.174913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.174930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.180627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.180656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.180687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.187227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.187259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.187277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.195030] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.195085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.195117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.202443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.202476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.202493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.209988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.210020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.210038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.216210] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.216244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.216261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.221887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 
[2024-07-25 14:25:49.221918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.221935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.227645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.227691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.227707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.233867] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.233898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.233915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.239600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.239628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.239658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.245414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.245445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.245462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.251444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.251475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.251492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.258507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.258538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.258569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.264557] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.264588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.264606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.271382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.271413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.271431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.278364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.278411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.278428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.285486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.285516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.285555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.291381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.291412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.291429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.297425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.655 [2024-07-25 14:25:49.297470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.655 [2024-07-25 14:25:49.297487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.655 [2024-07-25 14:25:49.303075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.656 [2024-07-25 14:25:49.303106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.656 [2024-07-25 14:25:49.303123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.918 [2024-07-25 14:25:49.308426] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.308458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.308476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.313491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.313522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.313554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.318319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.318363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.318380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.323433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.323463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.323480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.328551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.328580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.328612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.333554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.333589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.333606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.338477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.338520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.338536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
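The block above repeats one three-entry pattern: nvme_tcp.c:1459 reports a data digest error on the TCP qpair, nvme_qpair.c:243 prints the in-flight READ command, and nvme_qpair.c:474 prints its completion with status (00/22), i.e. status code type 0x0 (generic) and status code 0x22 (Command Transient Transport Error), with dnr:0 so the command may be retried. The sketch below is only an illustrative decode of those printed fields from completion dword 3, following the NVMe completion-entry bit layout; it is not SPDK code, and print_cpl_status is a hypothetical helper name.

#include <stdint.h>
#include <stdio.h>

/*
 * Decode the fields the log prints (sct/sc, cid, p, m, dnr) from NVMe
 * completion queue entry dword 3: bits 15:0 = command identifier,
 * bit 16 = phase tag, bits 24:17 = status code, bits 27:25 = status
 * code type, bit 30 = more, bit 31 = do-not-retry.
 */
static void
print_cpl_status(uint32_t dw3)
{
	unsigned cid = dw3 & 0xFFFFu;
	unsigned p   = (dw3 >> 16) & 0x1u;
	unsigned sc  = (dw3 >> 17) & 0xFFu;
	unsigned sct = (dw3 >> 25) & 0x7u;
	unsigned m   = (dw3 >> 30) & 0x1u;
	unsigned dnr = (dw3 >> 31) & 0x1u;

	printf("(%02x/%02x) cid:%u p:%u m:%u dnr:%u\n", sct, sc, cid, p, m, dnr);
}

int
main(void)
{
	/* Example: cid 6 completed with SCT 0x0 / SC 0x22, as in the entries above. */
	print_cpl_status((0x22u << 17) | 6u);
	return 0;
}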
00:24:19.919 [2024-07-25 14:25:49.343697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.343740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.343756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.348767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.348797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.348813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.353817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.353846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.353863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.358865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.358895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.358911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.363881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.363911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.363928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.368876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.368906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.368923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.373919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.373948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.373964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.378935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.378964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.378981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.384019] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.384072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.384091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.389319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.389364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.389380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.394289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.394319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.394337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.399104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.399135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.399152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.404229] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.404258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.404275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.409091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.409120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.409137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.413875] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.413903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.919 [2024-07-25 14:25:49.413934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.919 [2024-07-25 14:25:49.416775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.919 [2024-07-25 14:25:49.416803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.416840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.421579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.421606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.421638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.426456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.426485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.426517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.431353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.431398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.431414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.436385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.436415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.436447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.441325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.441353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.441369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.446354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.446397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.446413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.451375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.451402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.451418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.456362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.456405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.456421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.461267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.461297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.461315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.466203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.466247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.466263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.471126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.471156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.471173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.475849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.475877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 
[2024-07-25 14:25:49.475908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.480757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.480785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.480816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.485842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.485871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.485904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.490471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.490499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.490530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.495452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.495479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.495495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.500257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.500286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.500307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.505095] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.505123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.505139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.510043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.510094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.510112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.514840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.514882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.514897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.519654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.519682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.519698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.524554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.920 [2024-07-25 14:25:49.524597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.920 [2024-07-25 14:25:49.524612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.920 [2024-07-25 14:25:49.529957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.921 [2024-07-25 14:25:49.529988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.921 [2024-07-25 14:25:49.530005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.921 [2024-07-25 14:25:49.534823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.921 [2024-07-25 14:25:49.534869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.921 [2024-07-25 14:25:49.534886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.921 [2024-07-25 14:25:49.539961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.921 [2024-07-25 14:25:49.539989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.921 [2024-07-25 14:25:49.540020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.921 [2024-07-25 14:25:49.544859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.921 [2024-07-25 14:25:49.544898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.921 [2024-07-25 14:25:49.544916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.921 [2024-07-25 14:25:49.549643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.921 [2024-07-25 14:25:49.549672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.921 [2024-07-25 14:25:49.549703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.921 [2024-07-25 14:25:49.554617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.921 [2024-07-25 14:25:49.554646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.921 [2024-07-25 14:25:49.554677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.921 [2024-07-25 14:25:49.559606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.921 [2024-07-25 14:25:49.559635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.921 [2024-07-25 14:25:49.559666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.921 [2024-07-25 14:25:49.564658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:19.921 [2024-07-25 14:25:49.564688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.921 [2024-07-25 14:25:49.564704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.569587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.569617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.569650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.574544] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.574573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.574605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.579612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.579640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.579673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.584824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.584851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.584881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.590206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.590236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.590268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.596003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.596051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.596082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.601161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.601192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.601208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.606737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.606781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.606799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.612413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.612443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.612475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.618738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 
[2024-07-25 14:25:49.618768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.618800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.625731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.625761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.625778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.631635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.631665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.181 [2024-07-25 14:25:49.631698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.181 [2024-07-25 14:25:49.637294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.181 [2024-07-25 14:25:49.637325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.637347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.643440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.643470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.643502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.649472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.649502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.649534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.655326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.655358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.655376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.661189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.661220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.661237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.667165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.667197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.667214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.674171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.674202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.674220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.680264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.680295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.680313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.686629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.686659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.686693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.692579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.692615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.692633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.698990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.699021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.699038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.704369] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.704401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.704418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.708661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.708692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.708724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.713553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.713597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.713613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.719742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.719772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.719788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.725478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.725507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.725539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.730728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.730755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.730786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.736106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.736136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.736153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
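The digest reported as bad throughout this run is the NVMe/TCP data digest (DDGST), a CRC-32C computed over the payload of each data PDU; a mismatch against the digest carried in the PDU is what the driver logs as a data digest error, and the function name in the log, nvme_tcp_accel_seq_recv_compute_crc32_done, shows the receive-side CRC32 being computed through SPDK's accel sequence path before the command is completed. For reference only, a minimal self-contained CRC-32C (Castagnoli) routine looks like the sketch below; crc32c is a hypothetical helper name here, not the SPDK implementation, which uses table-driven or hardware-accelerated code.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Bitwise CRC-32C: reflected polynomial 0x82F63B78, initial value and
 * final XOR 0xFFFFFFFF. This is the digest algorithm NVMe/TCP uses for
 * its header and data digests.
 */
static uint32_t
crc32c(const void *buf, size_t len)
{
	const uint8_t *p = buf;
	uint32_t crc = 0xFFFFFFFFu;

	while (len--) {
		crc ^= *p++;
		for (int i = 0; i < 8; i++) {
			crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
		}
	}
	return crc ^ 0xFFFFFFFFu;
}

int
main(void)
{
	/* Standard check value: CRC-32C("123456789") == 0xE3069283. */
	printf("0x%08X\n", (unsigned)crc32c("123456789", 9));
	return 0;
}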
00:24:20.182 [2024-07-25 14:25:49.741171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.741202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.741219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.746539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.746567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.746600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.752198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.182 [2024-07-25 14:25:49.752242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.182 [2024-07-25 14:25:49.752259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.182 [2024-07-25 14:25:49.757909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.757939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.757972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.763384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.763430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.763446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.768242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.768273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.768291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.773216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.773248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.773265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.778799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.778828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.778859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.784528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.784575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.784601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.790037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.790079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.790098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.796859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.796891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.796909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.804925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.804968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.804984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.812714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.812745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.812761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.820269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.820302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.820319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.183 [2024-07-25 14:25:49.828436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.183 [2024-07-25 14:25:49.828469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.183 [2024-07-25 14:25:49.828486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.836432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.836479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.836495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.844579] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.844611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.844650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.852484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.852521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.852538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.860667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.860699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.860717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.868944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.868979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.869012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.876705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.876737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.876754] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.884623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.884656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.884673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.892313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.892355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.892374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.899876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.899907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.899925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.907578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.907611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.442 [2024-07-25 14:25:49.907629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.442 [2024-07-25 14:25:49.914340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.442 [2024-07-25 14:25:49.914386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 [2024-07-25 14:25:49.914403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.919965] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.919995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 [2024-07-25 14:25:49.920012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.925040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.925081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 
[2024-07-25 14:25:49.925100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.928694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.928738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 [2024-07-25 14:25:49.928754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.933841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.933871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 [2024-07-25 14:25:49.933903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.941465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.941493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 [2024-07-25 14:25:49.941524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.948433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.948463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 [2024-07-25 14:25:49.948479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.955192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.955242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 [2024-07-25 14:25:49.955263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.961871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.961903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.443 [2024-07-25 14:25:49.961920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.443 [2024-07-25 14:25:49.967959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290) 00:24:20.443 [2024-07-25 14:25:49.967990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0
00:24:20.443 [2024-07-25 14:25:49.968013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:20.443 [2024-07-25 14:25:49.974193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290)
00:24:20.443 [2024-07-25 14:25:49.974224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:20.443 [2024-07-25 14:25:49.974241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:20.443 [2024-07-25 14:25:49.979558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290)
00:24:20.443 [2024-07-25 14:25:49.979588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:20.443 [2024-07-25 14:25:49.979620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:20.443 [2024-07-25 14:25:49.984876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290)
00:24:20.443 [2024-07-25 14:25:49.984907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:20.443 [2024-07-25 14:25:49.984944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:20.443 [2024-07-25 14:25:49.990793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9dc290)
00:24:20.443 [2024-07-25 14:25:49.990824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:20.443 [2024-07-25 14:25:49.990855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:20.443
00:24:20.443 Latency(us)
00:24:20.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.443 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:20.443 nvme0n1 : 2.00 5324.89 665.61 0.00 0.00 3000.35 634.12 11068.30
00:24:20.443 ===================================================================================================================
00:24:20.443 Total : 5324.89 665.61 0.00 0.00 3000.35 634.12 11068.30
00:24:20.443 0
00:24:20.443 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:20.443 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:20.443 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:20.443 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:20.443 | .driver_specific
00:24:20.443 | .nvme_error
00:24:20.443 | .status_code
00:24:20.443 | .command_transient_transport_error'
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 343 > 0 ))
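The (( 343 > 0 )) check above is digest.sh asserting that the randread phase accumulated at least one transient transport error. A minimal sketch of how that count is obtained, reconstructed from the rpc.py and jq commands traced above (the helper body is an approximation, not the script's exact source):

    # Ask the bdevperf app, over its RPC socket, for per-bdev I/O statistics and pull out
    # the counter that each data digest failure increments (command transient transport error).
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }

    # Mirrors the assertion above: the test fails unless at least one such error was counted.
    (( $(get_transient_errcount nvme0n1) > 0 ))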
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1013028
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1013028 ']'
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1013028
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1013028
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1013028'
00:24:20.701 killing process with pid 1013028
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1013028
00:24:20.701 Received shutdown signal, test time was about 2.000000 seconds
00:24:20.701
00:24:20.701 Latency(us)
00:24:20.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.701 ===================================================================================================================
00:24:20.701 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:20.701 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1013028
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1013432
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1013432 /var/tmp/bperf.sock
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1013432 ']'
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:24:20.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
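run_bperf_err here repeats the digest-error experiment with random 4 KiB writes at queue depth 128. A condensed sketch of the sequence traced above and continued below, with a simple poll on rpc_get_methods standing in for the autotest waitforlisten helper (the real script drives these steps through its bperf_rpc/rpc_cmd wrappers):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # Start bdevperf idle (-z): it parks on the RPC socket until a workload is submitted.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!

    # Wait for the RPC socket to come up.
    until "$SPDK/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null; do sleep 0.1; done

    # Keep per-type NVMe error counters and let the bdev layer retry failed I/O (-1: no retry limit).
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start from a clean injection state, then attach the target with data digest (DDGST) enabled.
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t disable
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Inject crc32c corruption at an interval of 256 operations, then kick off the queued job.
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests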
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:20.959 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:20.959 [2024-07-25 14:25:50.568143] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization...
00:24:20.959 [2024-07-25 14:25:50.568217] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013432 ]
00:24:20.959 EAL: No free 2048 kB hugepages reported on node 1
00:24:21.217 [2024-07-25 14:25:50.626455] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:21.217 [2024-07-25 14:25:50.735246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:21.217 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:21.217 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:24:21.217 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:21.217 14:25:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:21.475 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:24:21.476 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:21.476 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:21.476 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:21.476 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:21.476 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:22.077 nvme0n1
00:24:22.077 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:24:22.077 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:22.077 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:22.077 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:22.077 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:24:22.077 14:25:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:22.077 Running I/O for 2 seconds... 00:24:22.077 [2024-07-25 14:25:51.592896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190ee5c8 00:24:22.077 [2024-07-25 14:25:51.593895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.593935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.604265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190fac10 00:24:22.077 [2024-07-25 14:25:51.605211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.605240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.618577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e1b48 00:24:22.077 [2024-07-25 14:25:51.620321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.620350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.629568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190edd58 00:24:22.077 [2024-07-25 14:25:51.631213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.631244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.639499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190fe2e8 00:24:22.077 [2024-07-25 14:25:51.640296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.640324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.651609] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e4578 00:24:22.077 [2024-07-25 14:25:51.652582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:6971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.652623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.663572] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190fc560 00:24:22.077 [2024-07-25 14:25:51.664564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:22.077 [2024-07-25 14:25:51.664606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.677321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f7100 00:24:22.077 [2024-07-25 14:25:51.678898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:7185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.678925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.687775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e5220 00:24:22.077 [2024-07-25 14:25:51.688819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.688861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.699422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f20d8 00:24:22.077 [2024-07-25 14:25:51.700287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:9194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.700316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.711603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f4298 00:24:22.077 [2024-07-25 14:25:51.712683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.712711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.077 [2024-07-25 14:25:51.722558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190fa7d8 00:24:22.077 [2024-07-25 14:25:51.723485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.077 [2024-07-25 14:25:51.723514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.734199] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190ea248 00:24:22.338 [2024-07-25 14:25:51.734912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.734940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.748626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e7818 00:24:22.338 [2024-07-25 14:25:51.750531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16782 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.750557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.756893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190eea00 00:24:22.338 [2024-07-25 14:25:51.757961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.758003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.769200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190ec408 00:24:22.338 [2024-07-25 14:25:51.770410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.770437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.780975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e4140 00:24:22.338 [2024-07-25 14:25:51.781764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.781792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.794264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e5a90 00:24:22.338 [2024-07-25 14:25:51.795817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.795859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.806426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190eaab8 00:24:22.338 [2024-07-25 14:25:51.808196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.808224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.814669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e5220 00:24:22.338 [2024-07-25 14:25:51.815568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.815594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.828684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190eb760 00:24:22.338 [2024-07-25 14:25:51.830139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:11142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.830167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.839444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e88f8 00:24:22.338 [2024-07-25 14:25:51.840585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.840619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.852722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e01f8 00:24:22.338 [2024-07-25 14:25:51.854677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.854707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:22.338 [2024-07-25 14:25:51.861436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e8d30 00:24:22.338 [2024-07-25 14:25:51.862446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.338 [2024-07-25 14:25:51.862473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.875731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e3d08 00:24:22.339 [2024-07-25 14:25:51.877298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.877325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.886227] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f57b0 00:24:22.339 [2024-07-25 14:25:51.887936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.887965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.897972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190e01f8 00:24:22.339 [2024-07-25 14:25:51.899348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.899377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.909540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190eff18 00:24:22.339 [2024-07-25 14:25:51.910677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:110 nsid:1 lba:5456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.910703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.920941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f1430 00:24:22.339 [2024-07-25 14:25:51.922245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.922272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.932875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.339 [2024-07-25 14:25:51.933094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.933123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.944848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.339 [2024-07-25 14:25:51.945031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.945074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.957240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.339 [2024-07-25 14:25:51.957475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.957517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.969649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.339 [2024-07-25 14:25:51.969877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.969903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.339 [2024-07-25 14:25:51.981963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.339 [2024-07-25 14:25:51.982157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.339 [2024-07-25 14:25:51.982188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:51.994967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:51.995214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:51.995245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.007204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.007414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.007441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.019210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.019396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:12670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.019425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.031498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.031731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.031758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.043677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.043865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.043892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.056079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.056268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.056301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.068775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.069023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.069050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.081745] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 
14:25:52.081915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.081946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.093997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.094192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.094222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.106086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.106384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.106412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.599 [2024-07-25 14:25:52.118682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.599 [2024-07-25 14:25:52.118877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.599 [2024-07-25 14:25:52.118905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.131138] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.131310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.131341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.143257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.143496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.143523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.155257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.155562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.155589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.167385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 
00:24:22.600 [2024-07-25 14:25:52.167676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.167719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.179541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.179782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.179810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.191727] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.191892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.191921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.203819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.203984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.204013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.216021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.216217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.216248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.228278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.228567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.228593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.600 [2024-07-25 14:25:52.240431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.600 [2024-07-25 14:25:52.240597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.600 [2024-07-25 14:25:52.240626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.253157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with 
pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.253406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.858 [2024-07-25 14:25:52.253432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.265288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.265474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.858 [2024-07-25 14:25:52.265507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.278109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.278272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.858 [2024-07-25 14:25:52.278304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.290535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.290701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.858 [2024-07-25 14:25:52.290728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.302690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.302854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.858 [2024-07-25 14:25:52.302884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.314552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.314736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.858 [2024-07-25 14:25:52.314763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.326674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.326841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.858 [2024-07-25 14:25:52.326870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.338783] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.338948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.858 [2024-07-25 14:25:52.338975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.858 [2024-07-25 14:25:52.350855] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.858 [2024-07-25 14:25:52.351020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.351069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.362849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.363048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.363088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.375478] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.375708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.375737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.387559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.387746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.387773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.399792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.399965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.399993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.411796] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.411962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.411992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.423713] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.423902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.423944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.435708] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.435872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.435902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.447757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.447920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.447949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.459821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.460017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.460044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.471862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.472027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.472056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.483911] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.484095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.484126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.496000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.496214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.496242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:22.859 [2024-07-25 14:25:52.508285] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:22.859 [2024-07-25 14:25:52.508484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.859 [2024-07-25 14:25:52.508514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.520789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.521000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.521029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.532696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.532864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.532893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.544769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.544979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.545008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.556776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.556989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.557019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.568895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.569077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.569109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.580862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.581028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.581068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 
14:25:52.592821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.592994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.593036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.604941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.605132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.605161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.617512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.617720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.617747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.630081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.630259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.630291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.642395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.642562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.642592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.654395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.654560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:14123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.654589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.117 [2024-07-25 14:25:52.666429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.117 [2024-07-25 14:25:52.666596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.117 [2024-07-25 14:25:52.666625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:24:23.117 [2024-07-25 14:25:52.678336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.118 [2024-07-25 14:25:52.678564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.118 [2024-07-25 14:25:52.678590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.118 [2024-07-25 14:25:52.690471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.118 [2024-07-25 14:25:52.690665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.118 [2024-07-25 14:25:52.690691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.118 [2024-07-25 14:25:52.702511] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.118 [2024-07-25 14:25:52.702728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.118 [2024-07-25 14:25:52.702769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.118 [2024-07-25 14:25:52.714617] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.118 [2024-07-25 14:25:52.714785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.118 [2024-07-25 14:25:52.714814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.118 [2024-07-25 14:25:52.726655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.118 [2024-07-25 14:25:52.726897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.118 [2024-07-25 14:25:52.726924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.118 [2024-07-25 14:25:52.738560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.118 [2024-07-25 14:25:52.738825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.118 [2024-07-25 14:25:52.738850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.118 [2024-07-25 14:25:52.750866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.118 [2024-07-25 14:25:52.751048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.118 [2024-07-25 14:25:52.751100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:24:23.118 [2024-07-25 14:25:52.762841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.118 [2024-07-25 14:25:52.763006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.118 [2024-07-25 14:25:52.763035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.377 [2024-07-25 14:25:52.775622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.377 [2024-07-25 14:25:52.775794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.377 [2024-07-25 14:25:52.775824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.377 [2024-07-25 14:25:52.788029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.377 [2024-07-25 14:25:52.788230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.377 [2024-07-25 14:25:52.788258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.800581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.800747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.800776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.812960] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.813156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.813184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.825161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.825414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.825455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.837198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.837410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.837436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.849196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.849390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.849420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.861290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.861495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.861522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.873291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.873555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.873582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.885942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.886157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.886190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.898389] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.898573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.898604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.910439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.910605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.910631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.922456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.922622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.922651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.934402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.934567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.934593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.946269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.946515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.946542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.958327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.958477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.958503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.970264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.970545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.970573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.982481] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.982649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:14546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.982676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:52.994342] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:52.994614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:52.994644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:53.006487] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:53.006660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:53.006687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.378 [2024-07-25 14:25:53.018529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.378 [2024-07-25 14:25:53.018792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.378 [2024-07-25 14:25:53.018821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.637 [2024-07-25 14:25:53.031234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.637 [2024-07-25 14:25:53.031457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.637 [2024-07-25 14:25:53.031484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.637 [2024-07-25 14:25:53.043988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.637 [2024-07-25 14:25:53.044191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.637 [2024-07-25 14:25:53.044219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.637 [2024-07-25 14:25:53.056022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.637 [2024-07-25 14:25:53.056235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.637 [2024-07-25 14:25:53.056263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.637 [2024-07-25 14:25:53.067982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.637 [2024-07-25 14:25:53.068225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.637 [2024-07-25 14:25:53.068252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.637 [2024-07-25 14:25:53.080160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.637 [2024-07-25 14:25:53.080334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7841 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.637 [2024-07-25 14:25:53.080360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.637 [2024-07-25 14:25:53.092449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.637 [2024-07-25 14:25:53.092670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.637 [2024-07-25 14:25:53.092696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.637 [2024-07-25 14:25:53.104831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.105080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.105108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.116972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.117165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:21039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.117193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.129470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.129698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.129727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.141930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.142129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.142158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.154044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.154221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.154251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.166257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.166469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:16885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.166496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.178128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.178302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 
14:25:53.178333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.190324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.190477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.190507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.202280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.202565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.202591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.214843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.215036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.215079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.227721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.227968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.227995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.239922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.240113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.240140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.252350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.252579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:12683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.252620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.264638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.264821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:23.638 [2024-07-25 14:25:53.264848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.638 [2024-07-25 14:25:53.276953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.638 [2024-07-25 14:25:53.277190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:18157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.638 [2024-07-25 14:25:53.277219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.289839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.290114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:16232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.290142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.302289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.302575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.302621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.314505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.314671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:18693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.314698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.326632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.326804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.326831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.338840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.339008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:8840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.339034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.351189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.351406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9639 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.351433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.363508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.363735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.363761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.375819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.376106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.376136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.388188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.388428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.388456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.401090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.401282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:8509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.401311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.413519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.413742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.413769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.425784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.425976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.426005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.438422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.438636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4495 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.438663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.450731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.450969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.450996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.463218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.463444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.463485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.475514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.475736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.475763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.487931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.488145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.488174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.500426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.500594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.500623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.512564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.512794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.897 [2024-07-25 14:25:53.512821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:23.897 [2024-07-25 14:25:53.524535] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0 00:24:23.897 [2024-07-25 14:25:53.524797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:8 nsid:1 lba:23415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.897 [2024-07-25 14:25:53.524824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:23.897 [2024-07-25 14:25:53.536937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0
00:24:23.897 [2024-07-25 14:25:53.537116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.897 [2024-07-25 14:25:53.537146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:23.897 [2024-07-25 14:25:53.549427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0
00:24:24.154 [2024-07-25 14:25:53.549594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:24.154 [2024-07-25 14:25:53.549625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:24.154 [2024-07-25 14:25:53.561829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0
00:24:24.154 [2024-07-25 14:25:53.561994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:24.154 [2024-07-25 14:25:53.562023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:24.154 [2024-07-25 14:25:53.574150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22baf30) with pdu=0x2000190f46d0
00:24:24.154 [2024-07-25 14:25:53.574320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:24.154 [2024-07-25 14:25:53.574351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:24.154
00:24:24.154 Latency(us)
00:24:24.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:24.154 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:24.154 nvme0n1 : 2.01 20966.91 81.90 0.00 0.00 6090.98 2694.26 15631.55
00:24:24.154 ===================================================================================================================
00:24:24.154 Total : 20966.91 81.90 0.00 0.00 6090.98 2694.26 15631.55
00:24:24.154 0
00:24:24.154 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:24.154 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:24.154 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:24.154 | .driver_specific
00:24:24.154 | .nvme_error
00:24:24.154 | .status_code
00:24:24.154 | .command_transient_transport_error'
00:24:24.154 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
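Note: the get_transient_errcount trace above reduces to one RPC query against the bdevperf socket followed by a jq extraction; a minimal shell sketch, assuming the same rpc.py path, socket and bdev name shown in the trace, would be:

  # query bdevperf's iostat and pull out the transient transport error counter
  errcount=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # the digest-error check only passes if at least one injected error was counted
  (( errcount > 0 ))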
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 164 > 0 ))
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1013432
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1013432 ']'
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1013432
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1013432
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1013432'
00:24:24.411 killing process with pid 1013432
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1013432
00:24:24.411 Received shutdown signal, test time was about 2.000000 seconds
00:24:24.411
00:24:24.411 Latency(us)
00:24:24.411 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:24.411 ===================================================================================================================
00:24:24.411 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:24.411 14:25:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1013432
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1013847
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1013847 /var/tmp/bperf.sock
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1013847 ']'
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on
UNIX domain socket /var/tmp/bperf.sock...' 00:24:24.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.669 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.669 [2024-07-25 14:25:54.159409] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:24.669 [2024-07-25 14:25:54.159493] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1013847 ] 00:24:24.669 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:24.669 Zero copy mechanism will not be used. 00:24:24.669 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.669 [2024-07-25 14:25:54.216709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.927 [2024-07-25 14:25:54.325484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.927 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.927 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:24.927 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:24.927 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:25.184 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:25.184 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.184 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.184 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.184 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.184 14:25:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.443 nvme0n1 00:24:25.443 14:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:25.443 14:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.443 14:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.443 14:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.443 14:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:25.443 
14:25:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:25.703 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:25.703 Zero copy mechanism will not be used. 00:24:25.703 Running I/O for 2 seconds... 00:24:25.703 [2024-07-25 14:25:55.154279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.703 [2024-07-25 14:25:55.154621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.703 [2024-07-25 14:25:55.154662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.703 [2024-07-25 14:25:55.161517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.703 [2024-07-25 14:25:55.161818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.703 [2024-07-25 14:25:55.161848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.703 [2024-07-25 14:25:55.168541] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.703 [2024-07-25 14:25:55.168833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.703 [2024-07-25 14:25:55.168862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.703 [2024-07-25 14:25:55.175148] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.703 [2024-07-25 14:25:55.175451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.703 [2024-07-25 14:25:55.175480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.703 [2024-07-25 14:25:55.180194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.703 [2024-07-25 14:25:55.180497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.703 [2024-07-25 14:25:55.180527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.703 [2024-07-25 14:25:55.185499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.703 [2024-07-25 14:25:55.185809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.185838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.190644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.190959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.190988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.196164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.196516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.196544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.202506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.202810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.202838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.208669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.208975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.209003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.214395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.214693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.214720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.220332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.220615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.220644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.226039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.226156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.226185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.233149] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.233528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.233571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.239654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.239934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.239963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.245836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.246175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.246204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.251884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.252193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.252222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.258039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.258336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.258379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.264801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.265126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.265169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.271860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.272157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.272189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
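Each injected failure shows up as three records: the tcp.c:2113 data digest (CRC32C) error on the qpair, the affected WRITE command, and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The test's assertion (host/digest.sh@71) only needs that error count to be non-zero, as in the (( 164 > 0 )) check visible earlier for the previous pass. How digest.sh actually collects the count is not visible in this excerpt; a purely illustrative tally from a hypothetical captured bperf.log would be:

  # Illustrative only -- not how digest.sh derives its count.
  errors=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log)
  (( errors > 0 )) && echo "digest error path exercised ($errors failures)"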
00:24:25.704 [2024-07-25 14:25:55.277392] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.277718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.277746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.282253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.282548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.282591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.287151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.287445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.287494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.291989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.292268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.292297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.296765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.297047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.297102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.301707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.302023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.302050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.307348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.307640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.307667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.312557] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.312879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.312907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.317405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.317741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.317768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.704 [2024-07-25 14:25:55.322321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.704 [2024-07-25 14:25:55.322608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.704 [2024-07-25 14:25:55.322637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.705 [2024-07-25 14:25:55.327113] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.705 [2024-07-25 14:25:55.327404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.705 [2024-07-25 14:25:55.327432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.705 [2024-07-25 14:25:55.332033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.705 [2024-07-25 14:25:55.332337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.705 [2024-07-25 14:25:55.332365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.705 [2024-07-25 14:25:55.336927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.705 [2024-07-25 14:25:55.337254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.705 [2024-07-25 14:25:55.337281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.705 [2024-07-25 14:25:55.341921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.705 [2024-07-25 14:25:55.342247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.705 [2024-07-25 14:25:55.342275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.705 [2024-07-25 14:25:55.346760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.705 [2024-07-25 14:25:55.347083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.705 [2024-07-25 14:25:55.347111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.705 [2024-07-25 14:25:55.351622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.705 [2024-07-25 14:25:55.351958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.705 [2024-07-25 14:25:55.351987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.356455] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.356764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.356791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.361281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.361581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.361624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.365992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.366284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.366313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.370729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.370995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.371037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.375545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.375794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.375822] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.380303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.380580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.380631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.385039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.385310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.385339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.389801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.390085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.390129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.394573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.394849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.394876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.399311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.399600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.399629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.403902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.404192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.404225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.408706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.408964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 
[2024-07-25 14:25:55.408992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.413371] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.413628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.413662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.418025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.418293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.418323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.422785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.423039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.423076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.427391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.427653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.427681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.432193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.432485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.432515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.437701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.437989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.438016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.443793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.965 [2024-07-25 14:25:55.444089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:25.965 [2024-07-25 14:25:55.444131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.965 [2024-07-25 14:25:55.449829] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.450146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.450175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.456276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.456644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.456686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.463041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.463360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.463404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.470143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.470450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.470478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.476657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.476931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.476958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.482786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.483187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.483219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.489255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.489516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.489545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.495620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.495887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.495915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.502142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.502439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.502483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.509236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.509553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.509580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.516249] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.516649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.516680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.523801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.524149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.524179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.530770] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.531080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.531123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.536847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.537201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.537244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.542970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.543247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.543276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.548692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.548966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.548994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.554764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.555079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.555108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.561177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.561452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.561494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.568166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.568424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.568467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.574888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.575301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.575329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.582145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 
[2024-07-25 14:25:55.582431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.582458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.588998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.589263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.589293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.596202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.596596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.596622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.603518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.603814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.966 [2024-07-25 14:25:55.603842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:25.966 [2024-07-25 14:25:55.610548] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:25.966 [2024-07-25 14:25:55.610884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:25.967 [2024-07-25 14:25:55.610912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.227 [2024-07-25 14:25:55.617711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.227 [2024-07-25 14:25:55.618018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.227 [2024-07-25 14:25:55.618046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.227 [2024-07-25 14:25:55.624721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.625152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.625195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.631941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.632304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.632333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.639102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.639469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.639511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.646170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.646443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.646472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.653367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.653693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.653720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.660661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.660969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.660999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.667939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.668311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.668341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.675161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.675468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.675498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.682186] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.682473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.682502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.689151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.689482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.689510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.696142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.696510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.696559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.702648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.702920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.702949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.708178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.708450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.708478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.713272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.713530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.713560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.718246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.718542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.718571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:26.228 [2024-07-25 14:25:55.723640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.723934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.723963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.729788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.730079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.730108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.735394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.735731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.735761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.742214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.742523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.742553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.747643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.747917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.747970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.752411] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.752702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.752731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.757149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.757452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.757481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.761883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.762211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.762240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.766681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.766975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.767004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.228 [2024-07-25 14:25:55.771380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.228 [2024-07-25 14:25:55.771644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.228 [2024-07-25 14:25:55.771673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.776095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.776350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.776393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.780878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.781139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.781168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.785510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.785766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.785795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.790170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.790425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.790455] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.794809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.795069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.795098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.799393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.799652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.799681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.803965] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.804236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.804265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.808539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.808795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.808824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.813099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.813357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.813386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.817663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.817942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.817971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.822240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.822504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.822533] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.826801] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.827056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.827096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.831426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.831692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.831721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.836230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.836499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.836528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.841607] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.841875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.841903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.846422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.846684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.846713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.851018] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.851292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.851322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.855764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.856022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:26.229 [2024-07-25 14:25:55.856051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.860555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.860814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.860843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.865160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.865425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.865454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.869772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.870035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.870070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.229 [2024-07-25 14:25:55.874458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.229 [2024-07-25 14:25:55.874716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.229 [2024-07-25 14:25:55.874745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.879026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.879306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.879343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.883698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.883962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.883990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.888308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.888561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.888590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.892928] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.893192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.893221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.897583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.897839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.897867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.902296] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.902586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.902615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.907045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.907316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.907351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.911694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.911949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.911978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.916349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.916609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.916639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.920927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.921194] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.921224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.925474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.925741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.925769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.930112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.930380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.930418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.935039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.935301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.935331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.939710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.939976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.940004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.944355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.944606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.944635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.948946] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.949215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.949243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.953564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.953821] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.953851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.958160] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.958456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.958485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.962836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.963133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.963162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.967449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.967716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.967743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.972124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.491 [2024-07-25 14:25:55.972395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.491 [2024-07-25 14:25:55.972422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.491 [2024-07-25 14:25:55.976822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:55.977110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:55.977149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:55.981519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:55.981772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:55.981800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:55.986208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 
00:24:26.492 [2024-07-25 14:25:55.986475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:55.986503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:55.990787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:55.991043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:55.991079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:55.995348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:55.995606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:55.995634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:55.999892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.000153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.000193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.004526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.004779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.004808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.009223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.009487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.009516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.014230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.014487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.014516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.018857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.019120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.019150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.023453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.023715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.023744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.028086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.028338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.028383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.032722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.032976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.033013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.037274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.037534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.037563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.041886] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.042152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.042180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.046443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.046734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.046763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.051034] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.051306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.051334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.055736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.056002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.056031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.060375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.060629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.060658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.064941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.065208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.065238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.069518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.069813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.069842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.074095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.074364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.074392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.492 [2024-07-25 14:25:56.078704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.078963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.078992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:26.492 [2024-07-25 14:25:56.083416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.492 [2024-07-25 14:25:56.083686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.492 [2024-07-25 14:25:56.083714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.088245] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.088515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.088543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.092929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.093192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.093221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.097711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.097975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.098029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.102360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.102625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.102652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.107117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.107395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.107438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.111951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.112219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.112248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.116653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.116920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.116949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.121335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.121631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.121659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.126038] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.126301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.126340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.130742] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.131006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.131064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.135463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.135717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.135745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.493 [2024-07-25 14:25:56.140090] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.493 [2024-07-25 14:25:56.140354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.493 [2024-07-25 14:25:56.140383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.754 [2024-07-25 14:25:56.144696] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.754 [2024-07-25 14:25:56.144962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.754 [2024-07-25 14:25:56.144992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.149286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.149542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.149578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.153967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.154229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.154258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.158518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.158782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.158809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.163194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.163461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.163489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.167874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.168152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.168183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.173172] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.173431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.173460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.177831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.178093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.178122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.182498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.182754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.182782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.187166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.187435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.187463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.191853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.192161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.192191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.196539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.196803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.196856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.201224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.201489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.201542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.205828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.206118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.206147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.210558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.210821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 
[2024-07-25 14:25:56.210874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.215218] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.215473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.215501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.219833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.220111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.220140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.224520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.224760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.224788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.229025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.229286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.229315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.233626] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.233862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.233890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.238206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.238461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.238488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.242816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.243057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.243092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.247309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.247548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.247576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.251773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.252018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.252046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.256242] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.256487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.755 [2024-07-25 14:25:56.256515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.755 [2024-07-25 14:25:56.260811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.755 [2024-07-25 14:25:56.261065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.261094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.265747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.266008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.266037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.271592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.271894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.271929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.277845] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.278118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.278147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.284643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.284893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.284922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.290980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.291241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.291271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.297041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.297377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.297405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.303878] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.304144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.304173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.310318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.310586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.310615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.317537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.317853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.317881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.324413] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.324714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.324744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.331476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.331766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.331796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.338003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.338383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.338412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.344955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.345312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.345356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.351924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.352286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.352315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.358923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.359283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.359327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.365859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.366139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.366168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.372868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 
[2024-07-25 14:25:56.373168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.373197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.379896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.380164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.380193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.386674] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.387020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.387081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.393669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.393982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.394010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.756 [2024-07-25 14:25:56.400442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:26.756 [2024-07-25 14:25:56.400811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.756 [2024-07-25 14:25:56.400840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.016 [2024-07-25 14:25:56.407561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.016 [2024-07-25 14:25:56.407840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.016 [2024-07-25 14:25:56.407870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.016 [2024-07-25 14:25:56.414565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.016 [2024-07-25 14:25:56.414852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.016 [2024-07-25 14:25:56.414891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.016 [2024-07-25 14:25:56.421326] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.016 [2024-07-25 14:25:56.421696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.016 [2024-07-25 14:25:56.421741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.016 [2024-07-25 14:25:56.428109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.016 [2024-07-25 14:25:56.428456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.016 [2024-07-25 14:25:56.428499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.016 [2024-07-25 14:25:56.435077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.016 [2024-07-25 14:25:56.435367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.016 [2024-07-25 14:25:56.435397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.016 [2024-07-25 14:25:56.442067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.442346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.442390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.449140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.449455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.449485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.455847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.456116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.456146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.462547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.462844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.462873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.469701] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.469959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.469988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.476804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.477130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.477160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.484048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.484348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.484377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.491067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.491312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.491342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.497849] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.498178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.498211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.504825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.505183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.505212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.511814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.512083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.512112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:27.017 [2024-07-25 14:25:56.518420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.518714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.518758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.524258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.524522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.524566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.530055] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.530330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.530374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.535895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.536154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.536184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.540893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.541170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.541199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.545506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.545748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.545777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.550132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.550389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.550417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.554728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.554981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.555014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.559343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.559599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.559627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.563923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.564176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.564205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.568549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.568803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.568841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.017 [2024-07-25 14:25:56.573124] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.017 [2024-07-25 14:25:56.573382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.017 [2024-07-25 14:25:56.573409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.577923] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.578179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.578209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.582493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.582732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.582760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.587105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.587349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.587377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.591714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.591957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.591986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.596207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.596453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.596481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.600784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.601039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.601088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.605640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.605896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.605924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.610235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.610487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.610516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.614803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.615041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.615079] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.619525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.619765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.619804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.624166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.624407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.624445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.628814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.629097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.629134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.633398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.633654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.633682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.638101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.638347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.638375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.642636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.642879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.642907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.647177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.647432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 
[2024-07-25 14:25:56.647486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.651815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.652092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.652131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.656501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.656744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.656773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.661035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.661291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.661320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.018 [2024-07-25 14:25:56.665658] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.018 [2024-07-25 14:25:56.665910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.018 [2024-07-25 14:25:56.665940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.670418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.670678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.670708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.675075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.675329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.675365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.679921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.680186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.680215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.685447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.685711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.685739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.690903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.691192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.691221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.696310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.696592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.696619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.701778] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.702072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.702101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.707285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.707569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.707596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.712970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.713259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.713288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.718161] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.718436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.718476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.722853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.723189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.723217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.727640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.727923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.727951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.732552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.732812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.732840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.737471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.737733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.737760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.742395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.742657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.742684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.747229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.747538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.747566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.752048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.752338] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.752380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.756846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.757175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.279 [2024-07-25 14:25:56.757204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.279 [2024-07-25 14:25:56.761937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.279 [2024-07-25 14:25:56.762227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.762260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.766710] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.766980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.767008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.771529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.771777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.771805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.777004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.777259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.777289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.781929] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.782203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.782232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.786615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 
[2024-07-25 14:25:56.786879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.786931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.791432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.791697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.791725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.796236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.796530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.796559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.800908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.801198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.801227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.805701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.805955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.805982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.810592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.810863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.810891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.815269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.815539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.815567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.820013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.820274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.820303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.824811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.825103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.825132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.829515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.829774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.829801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.834293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.834593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.834620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.839107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.839385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.839412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.844030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.844321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.844349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.848768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.849027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.849055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.853616] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.853894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.853922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.858901] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.859195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.859225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.864054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.864360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.864403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.870123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.280 [2024-07-25 14:25:56.870526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.280 [2024-07-25 14:25:56.870553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.280 [2024-07-25 14:25:56.876463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.281 [2024-07-25 14:25:56.876751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.281 [2024-07-25 14:25:56.876778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.281 [2024-07-25 14:25:56.883402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.281 [2024-07-25 14:25:56.883671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.281 [2024-07-25 14:25:56.883724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.281 [2024-07-25 14:25:56.890406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.281 [2024-07-25 14:25:56.890687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.281 [2024-07-25 14:25:56.890739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:27.281 [2024-07-25 14:25:56.897436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.281 [2024-07-25 14:25:56.897695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.281 [2024-07-25 14:25:56.897729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.281 [2024-07-25 14:25:56.903807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.281 [2024-07-25 14:25:56.904139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.281 [2024-07-25 14:25:56.904168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.281 [2024-07-25 14:25:56.910914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.281 [2024-07-25 14:25:56.911259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.281 [2024-07-25 14:25:56.911302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.281 [2024-07-25 14:25:56.918429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.281 [2024-07-25 14:25:56.918732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.281 [2024-07-25 14:25:56.918761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.281 [2024-07-25 14:25:56.925214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.281 [2024-07-25 14:25:56.925504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.281 [2024-07-25 14:25:56.925546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.541 [2024-07-25 14:25:56.932280] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.541 [2024-07-25 14:25:56.932668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.541 [2024-07-25 14:25:56.932695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.541 [2024-07-25 14:25:56.939492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.541 [2024-07-25 14:25:56.939768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.541 [2024-07-25 14:25:56.939796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.541 [2024-07-25 14:25:56.946892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.541 [2024-07-25 14:25:56.947227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.541 [2024-07-25 14:25:56.947256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.541 [2024-07-25 14:25:56.953810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.541 [2024-07-25 14:25:56.954159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.541 [2024-07-25 14:25:56.954188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.541 [2024-07-25 14:25:56.960818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.541 [2024-07-25 14:25:56.961121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.541 [2024-07-25 14:25:56.961150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.541 [2024-07-25 14:25:56.967067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:56.967413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:56.967458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:56.973611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:56.973909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:56.973941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:56.980872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:56.981170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:56.981199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:56.987139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:56.987415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:56.987443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:56.992010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:56.992284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:56.992313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:56.996880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:56.997171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:56.997201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.001858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.002150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.002179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.006891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.007184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.007213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.012418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.012681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.012708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.018352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.018630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.018657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.023131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.023413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.023440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.027982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.028268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.028296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.032989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.033268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.033297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.037921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.038219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.038246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.042813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.043135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.043163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.047735] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.048020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.048048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.052592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.052853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.052885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.057354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.057632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 
[2024-07-25 14:25:57.057659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.062217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.062521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.062550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.067099] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.067409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.067437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.072054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.072342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.072385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.076860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.077161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.077199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.081662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.081965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.081992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.086442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.086727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.086756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.542 [2024-07-25 14:25:57.091220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.542 [2024-07-25 14:25:57.091511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.542 [2024-07-25 14:25:57.091540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.095935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.096281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.096310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.101019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.101314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.101344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.105791] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.106115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.106144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.110749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.111104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.111146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.115485] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.115730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.115757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.120153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.120434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.120487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.124937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.125235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.125263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.129755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.130033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.130082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.134415] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.134729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.134756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.139283] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.139584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.139611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.144264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.144566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.144593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.543 [2024-07-25 14:25:57.148854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22bb270) with pdu=0x2000190fef90 00:24:27.543 [2024-07-25 14:25:57.148923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.543 [2024-07-25 14:25:57.148965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.543 00:24:27.543 Latency(us) 00:24:27.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.543 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:27.543 nvme0n1 : 2.00 5745.06 718.13 0.00 0.00 2777.50 1844.72 9417.77 00:24:27.543 =================================================================================================================== 00:24:27.543 Total : 5745.06 718.13 0.00 0.00 2777.50 1844.72 9417.77 00:24:27.543 0 00:24:27.543 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:27.543 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 
-- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:27.543 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:27.543 | .driver_specific 00:24:27.543 | .nvme_error 00:24:27.543 | .status_code 00:24:27.543 | .command_transient_transport_error' 00:24:27.543 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 371 > 0 )) 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1013847 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1013847 ']' 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1013847 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1013847 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1013847' 00:24:27.803 killing process with pid 1013847 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1013847 00:24:27.803 Received shutdown signal, test time was about 2.000000 seconds 00:24:27.803 00:24:27.803 Latency(us) 00:24:27.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.803 =================================================================================================================== 00:24:27.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:27.803 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1013847 00:24:28.061 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1012472 00:24:28.061 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1012472 ']' 00:24:28.061 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1012472 00:24:28.061 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:28.061 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:28.061 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1012472 00:24:28.321 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:28.321 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:28.321 14:25:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1012472' 00:24:28.321 killing process with pid 1012472 00:24:28.321 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1012472 00:24:28.321 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1012472 00:24:28.581 00:24:28.581 real 0m15.196s 00:24:28.581 user 0m29.786s 00:24:28.581 sys 0m4.359s 00:24:28.581 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.581 14:25:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.581 ************************************ 00:24:28.581 END TEST nvmf_digest_error 00:24:28.581 ************************************ 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.581 rmmod nvme_tcp 00:24:28.581 rmmod nvme_fabrics 00:24:28.581 rmmod nvme_keyring 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1012472 ']' 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1012472 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1012472 ']' 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1012472 00:24:28.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1012472) - No such process 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1012472 is not found' 00:24:28.581 Process with pid 1012472 is not found 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.581 14:25:58 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.581 14:25:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.490 14:26:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.490 00:24:30.490 real 0m35.240s 00:24:30.490 user 1m0.591s 00:24:30.490 sys 0m10.482s 00:24:30.490 14:26:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:30.490 14:26:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:30.490 ************************************ 00:24:30.490 END TEST nvmf_digest 00:24:30.490 ************************************ 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.749 ************************************ 00:24:30.749 START TEST nvmf_bdevperf 00:24:30.749 ************************************ 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:30.749 * Looking for test storage... 
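The nvmf_digest_error run that just finished passes or fails on a single counter: every WRITE above failed data-digest (CRC32C) verification and was completed back to the host as COMMAND TRANSIENT TRANSPORT ERROR (00/22), and host/digest.sh then reads the bdevperf application's I/O statistics over its RPC socket to confirm that the transient-error counter is non-zero (371 in this run). A minimal sketch of that check, using only the rpc.py path, socket and jq filter visible in the trace; the count variable name is illustrative:

  # pull nvme0n1 iostat from the bdevperf app and extract the transient transport error counter
  count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # the test asserts the counter is greater than zero, mirroring the (( 371 > 0 )) check above
  (( count > 0 ))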
00:24:30.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.749 14:26:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:32.656 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:32.657 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:32.657 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.657 14:26:02 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:32.657 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:32.657 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:32.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:24:32.657 00:24:32.657 --- 10.0.0.2 ping statistics --- 00:24:32.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.657 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:32.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:32.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:24:32.657 00:24:32.657 --- 10.0.0.1 ping statistics --- 00:24:32.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.657 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.657 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1016189 00:24:32.658 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:32.658 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1016189 00:24:32.658 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1016189 ']' 
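Before the target is started, nvmf/common.sh builds the two-port TCP topology traced above: the first E810 port (cvl_0_0) is moved into a network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, and reachability in both directions is confirmed with a ping. A condensed sketch of the traced commands (address flushes and error handling omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP port 4420, as in the trace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1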
00:24:32.658 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.658 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.658 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.658 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.658 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:32.658 [2024-07-25 14:26:02.269091] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:32.658 [2024-07-25 14:26:02.269178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.658 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.917 [2024-07-25 14:26:02.335356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:32.917 [2024-07-25 14:26:02.444974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.917 [2024-07-25 14:26:02.445026] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.917 [2024-07-25 14:26:02.445049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:32.917 [2024-07-25 14:26:02.445082] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:32.917 [2024-07-25 14:26:02.445094] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
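nvmfappstart then launches the target inside that namespace with core mask 0xE, which is why three reactors come up on cores 1-3 below, and waits for the RPC socket before any configuration is sent. A rough equivalent of the traced sequence; the readiness loop here is an assumption, the real scripts use waitforlisten from autotest_common.sh:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the default RPC socket until the target answers
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done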
00:24:32.917 [2024-07-25 14:26:02.445195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:32.917 [2024-07-25 14:26:02.445219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:32.917 [2024-07-25 14:26:02.445223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.917 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.917 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:32.917 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:32.917 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:32.917 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 [2024-07-25 14:26:02.594589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 Malloc0 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.177 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.178 [2024-07-25 14:26:02.655699] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.178 { 00:24:33.178 "params": { 00:24:33.178 "name": "Nvme$subsystem", 00:24:33.178 "trtype": "$TEST_TRANSPORT", 00:24:33.178 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.178 "adrfam": "ipv4", 00:24:33.178 "trsvcid": "$NVMF_PORT", 00:24:33.178 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.178 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.178 "hdgst": ${hdgst:-false}, 00:24:33.178 "ddgst": ${ddgst:-false} 00:24:33.178 }, 00:24:33.178 "method": "bdev_nvme_attach_controller" 00:24:33.178 } 00:24:33.178 EOF 00:24:33.178 )") 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:33.178 14:26:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:33.178 "params": { 00:24:33.178 "name": "Nvme1", 00:24:33.178 "trtype": "tcp", 00:24:33.178 "traddr": "10.0.0.2", 00:24:33.178 "adrfam": "ipv4", 00:24:33.178 "trsvcid": "4420", 00:24:33.178 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.178 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.178 "hdgst": false, 00:24:33.178 "ddgst": false 00:24:33.178 }, 00:24:33.178 "method": "bdev_nvme_attach_controller" 00:24:33.178 }' 00:24:33.178 [2024-07-25 14:26:02.703828] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:33.178 [2024-07-25 14:26:02.703914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016330 ] 00:24:33.178 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.178 [2024-07-25 14:26:02.764919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.438 [2024-07-25 14:26:02.878171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.698 Running I/O for 1 seconds... 
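With the target listening, bdevperf.sh provisions it entirely over JSON-RPC before the first run: a TCP transport, a 64 MB malloc bdev with 512-byte blocks, a subsystem that allows any host, its namespace, and a TCP listener on the namespace-side address. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so the same sequence can be written as the following sketch (the RPC socket path is assumed to be the default /var/tmp/spdk.sock):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -u 8192                                    # transport options exactly as traced
  $RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420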
00:24:34.634 00:24:34.634 Latency(us) 00:24:34.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.634 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:34.634 Verification LBA range: start 0x0 length 0x4000 00:24:34.634 Nvme1n1 : 1.02 8721.08 34.07 0.00 0.00 14618.67 2936.98 15534.46 00:24:34.634 =================================================================================================================== 00:24:34.634 Total : 8721.08 34.07 0.00 0.00 14618.67 2936.98 15534.46 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1016478 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:34.893 { 00:24:34.893 "params": { 00:24:34.893 "name": "Nvme$subsystem", 00:24:34.893 "trtype": "$TEST_TRANSPORT", 00:24:34.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:34.893 "adrfam": "ipv4", 00:24:34.893 "trsvcid": "$NVMF_PORT", 00:24:34.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:34.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:34.893 "hdgst": ${hdgst:-false}, 00:24:34.893 "ddgst": ${ddgst:-false} 00:24:34.893 }, 00:24:34.893 "method": "bdev_nvme_attach_controller" 00:24:34.893 } 00:24:34.893 EOF 00:24:34.893 )") 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:34.893 14:26:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:34.893 "params": { 00:24:34.893 "name": "Nvme1", 00:24:34.893 "trtype": "tcp", 00:24:34.893 "traddr": "10.0.0.2", 00:24:34.893 "adrfam": "ipv4", 00:24:34.893 "trsvcid": "4420", 00:24:34.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:34.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:34.893 "hdgst": false, 00:24:34.893 "ddgst": false 00:24:34.893 }, 00:24:34.893 "method": "bdev_nvme_attach_controller" 00:24:34.893 }' 00:24:34.893 [2024-07-25 14:26:04.508379] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:34.893 [2024-07-25 14:26:04.508452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016478 ] 00:24:34.893 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.152 [2024-07-25 14:26:04.567978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.152 [2024-07-25 14:26:04.678001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.412 Running I/O for 15 seconds... 
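On the host side neither bdevperf run touches a kernel NVMe device: the controller comes from the JSON that gen_nvmf_target_json pipes in on /dev/fd/62 and /dev/fd/63, whose filled-in form is printed in the trace. Re-indented, the per-controller entry is (the surrounding bdev-subsystem wrapper that gen_nvmf_target_json adds is not shown in the trace and is omitted here):

  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }

The second run (-q 128 -o 4096 -w verify -t 15 -f) is the disruptive case: a few lines further on the target process (pid 1016189) is killed with kill -9 while this 15-second verify job is in flight, and the ABORTED - SQ DELETION completions that follow are consistent with the queue pairs being torn down underneath the outstanding I/O.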
00:24:37.948 14:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1016189 00:24:37.948 14:26:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:37.948 [2024-07-25 14:26:07.476191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.948 [2024-07-25 14:26:07.476239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.948 [2024-07-25 14:26:07.476273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:50456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.948 [2024-07-25 14:26:07.476292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.948 [2024-07-25 14:26:07.476311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.948 [2024-07-25 14:26:07.476327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.948 [2024-07-25 14:26:07.476354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.948 [2024-07-25 14:26:07.476371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.948 [2024-07-25 14:26:07.476387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.948 [2024-07-25 14:26:07.476404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.949 [2024-07-25 14:26:07.476453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.949 [2024-07-25 14:26:07.476501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.949 [2024-07-25 14:26:07.476529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:49816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 
14:26:07.476586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:49824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:49904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.476986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.476999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:49968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:50024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:50032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.949 [2024-07-25 14:26:07.477568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:50064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.949 [2024-07-25 14:26:07.477580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:50104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:50112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:50120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:50136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:37.950 [2024-07-25 14:26:07.477829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:50160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:50168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.477976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.477990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:50192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:50200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:50216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478126] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:50224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:50232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:50272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:50280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:50312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:50336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.950 [2024-07-25 14:26:07.478580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.950 [2024-07-25 14:26:07.478594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:50352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:50368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:50376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:116 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:50392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:50400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:50408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:50424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:50432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:50440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.951 [2024-07-25 14:26:07.478897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.478923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.478949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:50528 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.478975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.478991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:50600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 
[2024-07-25 14:26:07.479297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:50640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479597] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.951 [2024-07-25 14:26:07.479611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.951 [2024-07-25 14:26:07.479623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:50704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:50720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:50736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:50776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:50784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:50800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.479984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:50808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.479996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.480010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.952 [2024-07-25 14:26:07.480022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.480035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1894830 is same with the state(5) to be set 00:24:37.952 [2024-07-25 14:26:07.480075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:37.952 [2024-07-25 14:26:07.480088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:37.952 [2024-07-25 14:26:07.480099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:50824 len:8 PRP1 0x0 PRP2 0x0 00:24:37.952 [2024-07-25 14:26:07.480111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.952 [2024-07-25 14:26:07.480175] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1894830 was disconnected and freed. reset controller. 
00:24:37.952 [2024-07-25 14:26:07.483118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.952 [2024-07-25 14:26:07.483198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.952 [2024-07-25 14:26:07.483903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.952 [2024-07-25 14:26:07.483932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.952 [2024-07-25 14:26:07.483948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.952 [2024-07-25 14:26:07.484429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.952 [2024-07-25 14:26:07.484633] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.952 [2024-07-25 14:26:07.484653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.952 [2024-07-25 14:26:07.484668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.952 [2024-07-25 14:26:07.487549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.952 [2024-07-25 14:26:07.496561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.952 [2024-07-25 14:26:07.496946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.952 [2024-07-25 14:26:07.496973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.952 [2024-07-25 14:26:07.496989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.952 [2024-07-25 14:26:07.497259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.952 [2024-07-25 14:26:07.497476] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.952 [2024-07-25 14:26:07.497496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.952 [2024-07-25 14:26:07.497510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.952 [2024-07-25 14:26:07.500444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.952 [2024-07-25 14:26:07.509727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.952 [2024-07-25 14:26:07.510141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.952 [2024-07-25 14:26:07.510170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.952 [2024-07-25 14:26:07.510186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.952 [2024-07-25 14:26:07.510422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.952 [2024-07-25 14:26:07.510627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.952 [2024-07-25 14:26:07.510646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.952 [2024-07-25 14:26:07.510659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.952 [2024-07-25 14:26:07.513545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.952 [2024-07-25 14:26:07.522851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.952 [2024-07-25 14:26:07.523275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.952 [2024-07-25 14:26:07.523303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.952 [2024-07-25 14:26:07.523319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.952 [2024-07-25 14:26:07.523556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.952 [2024-07-25 14:26:07.523760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.952 [2024-07-25 14:26:07.523779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.952 [2024-07-25 14:26:07.523792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.952 [2024-07-25 14:26:07.526630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.952 [2024-07-25 14:26:07.535938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.952 [2024-07-25 14:26:07.536257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.952 [2024-07-25 14:26:07.536284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.952 [2024-07-25 14:26:07.536300] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.953 [2024-07-25 14:26:07.536516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.953 [2024-07-25 14:26:07.536721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.953 [2024-07-25 14:26:07.536740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.953 [2024-07-25 14:26:07.536753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.953 [2024-07-25 14:26:07.539637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.953 [2024-07-25 14:26:07.549192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.953 [2024-07-25 14:26:07.549619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.953 [2024-07-25 14:26:07.549647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.953 [2024-07-25 14:26:07.549663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.953 [2024-07-25 14:26:07.549900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.953 [2024-07-25 14:26:07.550147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.953 [2024-07-25 14:26:07.550169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.953 [2024-07-25 14:26:07.550182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.953 [2024-07-25 14:26:07.553049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.953 [2024-07-25 14:26:07.562193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.953 [2024-07-25 14:26:07.562602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.953 [2024-07-25 14:26:07.562629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.953 [2024-07-25 14:26:07.562644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.953 [2024-07-25 14:26:07.562885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.953 [2024-07-25 14:26:07.563131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.953 [2024-07-25 14:26:07.563152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.953 [2024-07-25 14:26:07.563166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.953 [2024-07-25 14:26:07.566041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:37.953 [2024-07-25 14:26:07.575174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.953 [2024-07-25 14:26:07.575501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.953 [2024-07-25 14:26:07.575528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.953 [2024-07-25 14:26:07.575544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.953 [2024-07-25 14:26:07.575760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.953 [2024-07-25 14:26:07.575964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.953 [2024-07-25 14:26:07.575984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.953 [2024-07-25 14:26:07.575996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.953 [2024-07-25 14:26:07.578933] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:37.953 [2024-07-25 14:26:07.588237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:37.953 [2024-07-25 14:26:07.588588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:37.953 [2024-07-25 14:26:07.588616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:37.953 [2024-07-25 14:26:07.588631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:37.953 [2024-07-25 14:26:07.588871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:37.953 [2024-07-25 14:26:07.589101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:37.953 [2024-07-25 14:26:07.589137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:37.953 [2024-07-25 14:26:07.589151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:37.953 [2024-07-25 14:26:07.592021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.242 [2024-07-25 14:26:07.601607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.602088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.602154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.602171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.602412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.602616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.602635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.602652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.605525] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.242 [2024-07-25 14:26:07.614685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.615095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.615123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.615139] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.615374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.615579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.615598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.615611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.618515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.242 [2024-07-25 14:26:07.627926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.628429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.628456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.628472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.628711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.628915] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.628935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.628947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.631782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.242 [2024-07-25 14:26:07.641104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.641481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.641524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.641540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.641766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.641955] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.641975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.641987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.644980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.242 [2024-07-25 14:26:07.654321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.654684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.654715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.654731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.654968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.655218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.655240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.655253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.658089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.242 [2024-07-25 14:26:07.667428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.667777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.667804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.667819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.668033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.668282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.668303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.668318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.671207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.242 [2024-07-25 14:26:07.680492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.680910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.680936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.680952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.681213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.681451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.681471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.681484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.684342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.242 [2024-07-25 14:26:07.693606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.694017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.694045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.694083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.694329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.694556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.694576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.694588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.697459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.242 [2024-07-25 14:26:07.706667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.707043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.707077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.707108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.707346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.707550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.707569] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.707582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.710460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.242 [2024-07-25 14:26:07.719711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.720088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.720114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.720130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.242 [2024-07-25 14:26:07.720346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.242 [2024-07-25 14:26:07.720568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.242 [2024-07-25 14:26:07.720587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.242 [2024-07-25 14:26:07.720600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.242 [2024-07-25 14:26:07.723497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.242 [2024-07-25 14:26:07.732748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.242 [2024-07-25 14:26:07.733083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.242 [2024-07-25 14:26:07.733127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.242 [2024-07-25 14:26:07.733143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.733373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.733584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.733604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.733617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.737040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.243 [2024-07-25 14:26:07.746538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.746867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.746910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.746926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.747165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.747412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.747432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.747445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.750437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.243 [2024-07-25 14:26:07.759663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.760074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.760102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.760117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.760353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.760542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.760561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.760574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.763497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.243 [2024-07-25 14:26:07.772704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.773013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.773039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.773055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.773315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.773521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.773540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.773553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.776424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.243 [2024-07-25 14:26:07.785675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.786031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.786066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.786105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.786347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.786551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.786570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.786582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.789454] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.243 [2024-07-25 14:26:07.798730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.799077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.799105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.799120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.799356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.799578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.799598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.799611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.802682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.243 [2024-07-25 14:26:07.811772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.812119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.812146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.812161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.812390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.812595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.812615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.812627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.815540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.243 [2024-07-25 14:26:07.824868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.825250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.825277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.825293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.825549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.825752] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.825775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.825788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.828700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.243 [2024-07-25 14:26:07.837852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.838264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.838292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.838307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.838542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.838746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.838765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.838777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.841575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.243 [2024-07-25 14:26:07.850875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.851286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.851313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.851328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.851563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.851766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.851785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.851798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.854595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.243 [2024-07-25 14:26:07.863897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.864245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.864273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.864288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.864504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.864708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.864727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.864739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.867621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.243 [2024-07-25 14:26:07.877036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.877450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.877477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.877492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.877728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.877931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.877951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.877963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.243 [2024-07-25 14:26:07.880875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.243 [2024-07-25 14:26:07.890274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.243 [2024-07-25 14:26:07.890648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.243 [2024-07-25 14:26:07.890676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.243 [2024-07-25 14:26:07.890692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.243 [2024-07-25 14:26:07.890932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.243 [2024-07-25 14:26:07.891178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.243 [2024-07-25 14:26:07.891199] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.243 [2024-07-25 14:26:07.891213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.502 [2024-07-25 14:26:07.894437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.502 [2024-07-25 14:26:07.903378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.502 [2024-07-25 14:26:07.903724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.503 [2024-07-25 14:26:07.903751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.503 [2024-07-25 14:26:07.903767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.503 [2024-07-25 14:26:07.904003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.503 [2024-07-25 14:26:07.904245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.503 [2024-07-25 14:26:07.904266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.503 [2024-07-25 14:26:07.904280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.503 [2024-07-25 14:26:07.907187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.503 [2024-07-25 14:26:07.916456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.503 [2024-07-25 14:26:07.916827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.503 [2024-07-25 14:26:07.916853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.503 [2024-07-25 14:26:07.916868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.503 [2024-07-25 14:26:07.917101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.503 [2024-07-25 14:26:07.917302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.503 [2024-07-25 14:26:07.917323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.503 [2024-07-25 14:26:07.917336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.503 [2024-07-25 14:26:07.920219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.503 [2024-07-25 14:26:07.929566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.503 [2024-07-25 14:26:07.929911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.503 [2024-07-25 14:26:07.929938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.503 [2024-07-25 14:26:07.929954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.503 [2024-07-25 14:26:07.930218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.503 [2024-07-25 14:26:07.930444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.503 [2024-07-25 14:26:07.930463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.503 [2024-07-25 14:26:07.930475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.503 [2024-07-25 14:26:07.933347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.503 [2024-07-25 14:26:07.942628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.503 [2024-07-25 14:26:07.942973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.503 [2024-07-25 14:26:07.943000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.503 [2024-07-25 14:26:07.943015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.503 [2024-07-25 14:26:07.943282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.503 [2024-07-25 14:26:07.943520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.503 [2024-07-25 14:26:07.943540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.503 [2024-07-25 14:26:07.943553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.503 [2024-07-25 14:26:07.946422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.503 [2024-07-25 14:26:07.955717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.503 [2024-07-25 14:26:07.956070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.503 [2024-07-25 14:26:07.956098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.503 [2024-07-25 14:26:07.956114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.503 [2024-07-25 14:26:07.956348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.503 [2024-07-25 14:26:07.956551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.503 [2024-07-25 14:26:07.956570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.503 [2024-07-25 14:26:07.956587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.503 [2024-07-25 14:26:07.959383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.503 [2024-07-25 14:26:07.968690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.503 [2024-07-25 14:26:07.969037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.503 [2024-07-25 14:26:07.969070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.503 [2024-07-25 14:26:07.969087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.503 [2024-07-25 14:26:07.969326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.503 [2024-07-25 14:26:07.969530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.503 [2024-07-25 14:26:07.969550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.503 [2024-07-25 14:26:07.969562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.503 [2024-07-25 14:26:07.972354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.503 [2024-07-25 14:26:07.981653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.503 [2024-07-25 14:26:07.981998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.503 [2024-07-25 14:26:07.982024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.503 [2024-07-25 14:26:07.982040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.503 [2024-07-25 14:26:07.982298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.503 [2024-07-25 14:26:07.982515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.503 [2024-07-25 14:26:07.982535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.503 [2024-07-25 14:26:07.982548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.503 [2024-07-25 14:26:07.985423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.503 [2024-07-25 14:26:07.994878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.503 [2024-07-25 14:26:07.995253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.503 [2024-07-25 14:26:07.995280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.503 [2024-07-25 14:26:07.995295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.503 [2024-07-25 14:26:07.995529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.503 [2024-07-25 14:26:07.995733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.503 [2024-07-25 14:26:07.995752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.503 [2024-07-25 14:26:07.995765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.503 [2024-07-25 14:26:07.998748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.504 [2024-07-25 14:26:08.008033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.008388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.008416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.008431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.008669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.504 [2024-07-25 14:26:08.008873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.504 [2024-07-25 14:26:08.008892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.504 [2024-07-25 14:26:08.008904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.504 [2024-07-25 14:26:08.011776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.504 [2024-07-25 14:26:08.021114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.021523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.021551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.021566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.021805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.504 [2024-07-25 14:26:08.022009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.504 [2024-07-25 14:26:08.022029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.504 [2024-07-25 14:26:08.022055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.504 [2024-07-25 14:26:08.024964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.504 [2024-07-25 14:26:08.034160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.034569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.034596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.034612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.034847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.504 [2024-07-25 14:26:08.035077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.504 [2024-07-25 14:26:08.035097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.504 [2024-07-25 14:26:08.035124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.504 [2024-07-25 14:26:08.037992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.504 [2024-07-25 14:26:08.047244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.047591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.047618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.047633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.047874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.504 [2024-07-25 14:26:08.048104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.504 [2024-07-25 14:26:08.048125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.504 [2024-07-25 14:26:08.048153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.504 [2024-07-25 14:26:08.051028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.504 [2024-07-25 14:26:08.060321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.060726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.060753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.060769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.061006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.504 [2024-07-25 14:26:08.061240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.504 [2024-07-25 14:26:08.061262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.504 [2024-07-25 14:26:08.061275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.504 [2024-07-25 14:26:08.064166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.504 [2024-07-25 14:26:08.073393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.073736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.073763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.073779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.074014] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.504 [2024-07-25 14:26:08.074248] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.504 [2024-07-25 14:26:08.074269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.504 [2024-07-25 14:26:08.074282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.504 [2024-07-25 14:26:08.077154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.504 [2024-07-25 14:26:08.086561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.086908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.086935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.086950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.087212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.504 [2024-07-25 14:26:08.087420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.504 [2024-07-25 14:26:08.087439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.504 [2024-07-25 14:26:08.087456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.504 [2024-07-25 14:26:08.090356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.504 [2024-07-25 14:26:08.099772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.100119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.100147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.100163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.100398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.504 [2024-07-25 14:26:08.100601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.504 [2024-07-25 14:26:08.100621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.504 [2024-07-25 14:26:08.100634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.504 [2024-07-25 14:26:08.103547] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.504 [2024-07-25 14:26:08.113412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.504 [2024-07-25 14:26:08.113885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.504 [2024-07-25 14:26:08.113940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.504 [2024-07-25 14:26:08.113955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.504 [2024-07-25 14:26:08.114211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.505 [2024-07-25 14:26:08.114418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.505 [2024-07-25 14:26:08.114438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.505 [2024-07-25 14:26:08.114451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.505 [2024-07-25 14:26:08.117338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.505 [2024-07-25 14:26:08.126552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.505 [2024-07-25 14:26:08.126979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.505 [2024-07-25 14:26:08.127008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.505 [2024-07-25 14:26:08.127024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.505 [2024-07-25 14:26:08.127274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.505 [2024-07-25 14:26:08.127489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.505 [2024-07-25 14:26:08.127510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.505 [2024-07-25 14:26:08.127524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.505 [2024-07-25 14:26:08.130789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.505 [2024-07-25 14:26:08.140185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.505 [2024-07-25 14:26:08.140596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.505 [2024-07-25 14:26:08.140632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.505 [2024-07-25 14:26:08.140649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.505 [2024-07-25 14:26:08.140893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.505 [2024-07-25 14:26:08.141124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.505 [2024-07-25 14:26:08.141147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.505 [2024-07-25 14:26:08.141162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.505 [2024-07-25 14:26:08.144449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.505 [2024-07-25 14:26:08.153853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.154201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.154230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.154246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.154487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.154730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.154750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.154764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.157866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.766 [2024-07-25 14:26:08.167438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.167813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.167862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.167879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.168135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.168355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.168390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.168403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.171647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.766 [2024-07-25 14:26:08.180915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.181231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.181261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.181277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.181532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.181749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.181770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.181783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.185064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.766 [2024-07-25 14:26:08.194509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.194880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.194929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.194946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.195173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.195421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.195441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.195454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.198706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.766 [2024-07-25 14:26:08.207741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.208177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.208206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.208222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.208455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.208665] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.208685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.208697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.211764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.766 [2024-07-25 14:26:08.221171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.221594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.221621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.221637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.221876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.222114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.222137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.222151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.225171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.766 [2024-07-25 14:26:08.234532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.234841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.234879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.234912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.235147] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.235380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.235400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.235427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.238420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.766 [2024-07-25 14:26:08.247876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.248229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.248257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.248273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.248525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.248728] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.248747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.248759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.251754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.766 [2024-07-25 14:26:08.261092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.766 [2024-07-25 14:26:08.261519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.766 [2024-07-25 14:26:08.261547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.766 [2024-07-25 14:26:08.261562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.766 [2024-07-25 14:26:08.261809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.766 [2024-07-25 14:26:08.261997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.766 [2024-07-25 14:26:08.262017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.766 [2024-07-25 14:26:08.262029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.766 [2024-07-25 14:26:08.264955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.766 [2024-07-25 14:26:08.274234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.274596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.274623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.274643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.274874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.275088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.275109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.275121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.278011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.767 [2024-07-25 14:26:08.287228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.287542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.287570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.287585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.287803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.288007] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.288026] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.288039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.290926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.767 [2024-07-25 14:26:08.300420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.300774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.300802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.300817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.301053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.301256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.301276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.301289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.304057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.767 [2024-07-25 14:26:08.313524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.313869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.313897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.313912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.314159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.314359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.314384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.314416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.317295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.767 [2024-07-25 14:26:08.326516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.326890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.326917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.326932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.327176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.327407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.327441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.327454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.330311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.767 [2024-07-25 14:26:08.339689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.340084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.340141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.340158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.340410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.340612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.340632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.340646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.343405] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.767 [2024-07-25 14:26:08.352960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.353468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.353496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.353511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.353761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.353965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.353984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.353997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.356942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.767 [2024-07-25 14:26:08.366181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.366552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.366580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.366596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.366831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.367048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.367077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.367092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.370003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.767 [2024-07-25 14:26:08.379404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.379782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.379845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.767 [2024-07-25 14:26:08.379862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.767 [2024-07-25 14:26:08.380104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.767 [2024-07-25 14:26:08.380309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.767 [2024-07-25 14:26:08.380328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.767 [2024-07-25 14:26:08.380341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.767 [2024-07-25 14:26:08.383211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:38.767 [2024-07-25 14:26:08.392406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.767 [2024-07-25 14:26:08.392813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.767 [2024-07-25 14:26:08.392841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.768 [2024-07-25 14:26:08.392856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.768 [2024-07-25 14:26:08.393104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.768 [2024-07-25 14:26:08.393309] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.768 [2024-07-25 14:26:08.393329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.768 [2024-07-25 14:26:08.393341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.768 [2024-07-25 14:26:08.396144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.768 [2024-07-25 14:26:08.405543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.768 [2024-07-25 14:26:08.405923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.768 [2024-07-25 14:26:08.405975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:38.768 [2024-07-25 14:26:08.405991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:38.768 [2024-07-25 14:26:08.406253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:38.768 [2024-07-25 14:26:08.406459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.768 [2024-07-25 14:26:08.406480] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.768 [2024-07-25 14:26:08.406492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.768 [2024-07-25 14:26:08.409418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.032 [2024-07-25 14:26:08.419010] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.032 [2024-07-25 14:26:08.419349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.032 [2024-07-25 14:26:08.419377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.032 [2024-07-25 14:26:08.419394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.032 [2024-07-25 14:26:08.419625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.032 [2024-07-25 14:26:08.419828] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.032 [2024-07-25 14:26:08.419848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.032 [2024-07-25 14:26:08.419861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.032 [2024-07-25 14:26:08.422987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.032 [2024-07-25 14:26:08.432161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.032 [2024-07-25 14:26:08.432614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.032 [2024-07-25 14:26:08.432650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.032 [2024-07-25 14:26:08.432681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.032 [2024-07-25 14:26:08.432927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.032 [2024-07-25 14:26:08.433145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.032 [2024-07-25 14:26:08.433177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.032 [2024-07-25 14:26:08.433190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.032 [2024-07-25 14:26:08.436000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.032 [2024-07-25 14:26:08.445433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.032 [2024-07-25 14:26:08.445843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.032 [2024-07-25 14:26:08.445871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.032 [2024-07-25 14:26:08.445887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.032 [2024-07-25 14:26:08.446133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.032 [2024-07-25 14:26:08.446358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.032 [2024-07-25 14:26:08.446378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.032 [2024-07-25 14:26:08.446395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.032 [2024-07-25 14:26:08.449276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.032 [2024-07-25 14:26:08.458587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.032 [2024-07-25 14:26:08.458962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.032 [2024-07-25 14:26:08.459026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.032 [2024-07-25 14:26:08.459042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.032 [2024-07-25 14:26:08.459300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.032 [2024-07-25 14:26:08.459521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.032 [2024-07-25 14:26:08.459541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.032 [2024-07-25 14:26:08.459553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.032 [2024-07-25 14:26:08.462469] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.032 [2024-07-25 14:26:08.471778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.032 [2024-07-25 14:26:08.472188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.032 [2024-07-25 14:26:08.472214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.032 [2024-07-25 14:26:08.472230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.032 [2024-07-25 14:26:08.472464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.032 [2024-07-25 14:26:08.472667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.032 [2024-07-25 14:26:08.472687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.032 [2024-07-25 14:26:08.472700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.032 [2024-07-25 14:26:08.475607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.032 [2024-07-25 14:26:08.484888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.032 [2024-07-25 14:26:08.485286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.032 [2024-07-25 14:26:08.485323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.032 [2024-07-25 14:26:08.485339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.032 [2024-07-25 14:26:08.485590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.032 [2024-07-25 14:26:08.485793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.032 [2024-07-25 14:26:08.485812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.032 [2024-07-25 14:26:08.485823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.032 [2024-07-25 14:26:08.488683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.032 [2024-07-25 14:26:08.498081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.032 [2024-07-25 14:26:08.498569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.032 [2024-07-25 14:26:08.498621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.032 [2024-07-25 14:26:08.498638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.032 [2024-07-25 14:26:08.498894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.032 [2024-07-25 14:26:08.499107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.032 [2024-07-25 14:26:08.499127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.032 [2024-07-25 14:26:08.499140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.032 [2024-07-25 14:26:08.501989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.033 [2024-07-25 14:26:08.511349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.511739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.511792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.511808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.512048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.512268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.512289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.512303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.515212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.033 [2024-07-25 14:26:08.524618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.524975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.525063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.525081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.525357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.525562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.525582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.525596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.528480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.033 [2024-07-25 14:26:08.537855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.538251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.538279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.538296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.538547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.538755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.538776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.538790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.541717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.033 [2024-07-25 14:26:08.551030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.551410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.551437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.551453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.551671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.551875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.551895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.551908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.554788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.033 [2024-07-25 14:26:08.564083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.564459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.564512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.564527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.564769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.564958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.564978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.564990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.567867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.033 [2024-07-25 14:26:08.577213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.577622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.577649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.577665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.577902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.578150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.578173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.578187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.581066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.033 [2024-07-25 14:26:08.590302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.590724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.590752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.590767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.591002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.591242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.591265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.591279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.594176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.033 [2024-07-25 14:26:08.603340] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.603713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.603741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.603756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.603974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.604226] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.604248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.604261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.607155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.033 [2024-07-25 14:26:08.616414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.616770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.033 [2024-07-25 14:26:08.616798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.033 [2024-07-25 14:26:08.616813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.033 [2024-07-25 14:26:08.617049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.033 [2024-07-25 14:26:08.617273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.033 [2024-07-25 14:26:08.617294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.033 [2024-07-25 14:26:08.617307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.033 [2024-07-25 14:26:08.620195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.033 [2024-07-25 14:26:08.629438] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.033 [2024-07-25 14:26:08.629876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.034 [2024-07-25 14:26:08.629928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.034 [2024-07-25 14:26:08.629948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.034 [2024-07-25 14:26:08.630207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.034 [2024-07-25 14:26:08.630416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.034 [2024-07-25 14:26:08.630437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.034 [2024-07-25 14:26:08.630450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.034 [2024-07-25 14:26:08.633308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.034 [2024-07-25 14:26:08.642577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.034 [2024-07-25 14:26:08.642922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.034 [2024-07-25 14:26:08.642948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.034 [2024-07-25 14:26:08.642964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.034 [2024-07-25 14:26:08.643224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.034 [2024-07-25 14:26:08.643432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.034 [2024-07-25 14:26:08.643451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.034 [2024-07-25 14:26:08.643464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.034 [2024-07-25 14:26:08.646322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.034 [2024-07-25 14:26:08.655647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.034 [2024-07-25 14:26:08.656021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.034 [2024-07-25 14:26:08.656048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.034 [2024-07-25 14:26:08.656075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.034 [2024-07-25 14:26:08.656296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.034 [2024-07-25 14:26:08.656516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.034 [2024-07-25 14:26:08.656536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.034 [2024-07-25 14:26:08.656550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.034 [2024-07-25 14:26:08.659423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.034 [2024-07-25 14:26:08.668925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.034 [2024-07-25 14:26:08.669407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.034 [2024-07-25 14:26:08.669460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.034 [2024-07-25 14:26:08.669476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.034 [2024-07-25 14:26:08.669719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.034 [2024-07-25 14:26:08.669911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.034 [2024-07-25 14:26:08.669932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.034 [2024-07-25 14:26:08.669945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.034 [2024-07-25 14:26:08.672945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.312 [2024-07-25 14:26:08.682319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.312 [2024-07-25 14:26:08.682801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.312 [2024-07-25 14:26:08.682855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.312 [2024-07-25 14:26:08.682871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.312 [2024-07-25 14:26:08.683144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.312 [2024-07-25 14:26:08.683345] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.312 [2024-07-25 14:26:08.683366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.312 [2024-07-25 14:26:08.683380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.312 [2024-07-25 14:26:08.686583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.312 [2024-07-25 14:26:08.695713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.312 [2024-07-25 14:26:08.696081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.312 [2024-07-25 14:26:08.696110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.312 [2024-07-25 14:26:08.696127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.312 [2024-07-25 14:26:08.696365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.312 [2024-07-25 14:26:08.696582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.312 [2024-07-25 14:26:08.696603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.312 [2024-07-25 14:26:08.696617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.312 [2024-07-25 14:26:08.699831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.312 [2024-07-25 14:26:08.709143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.312 [2024-07-25 14:26:08.709560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.312 [2024-07-25 14:26:08.709589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.312 [2024-07-25 14:26:08.709606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.312 [2024-07-25 14:26:08.709850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.312 [2024-07-25 14:26:08.710051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.312 [2024-07-25 14:26:08.710101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.312 [2024-07-25 14:26:08.710116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.713226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.313 [2024-07-25 14:26:08.722190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.722629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.722658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.722674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.722910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.723159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.723181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.723196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.726093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.313 [2024-07-25 14:26:08.735199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.735544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.735572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.735588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.735804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.736008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.736028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.736057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.738987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.313 [2024-07-25 14:26:08.748411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.748759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.748786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.748801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.749016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.749253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.749274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.749288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.752172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.313 [2024-07-25 14:26:08.761510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.761914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.761969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.761990] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.762246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.762455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.762476] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.762489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.765388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.313 [2024-07-25 14:26:08.774727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.775134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.775162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.775178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.775412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.775601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.775621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.775634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.778558] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.313 [2024-07-25 14:26:08.787816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.788227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.788258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.788274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.788508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.788711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.788732] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.788744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.791630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.313 [2024-07-25 14:26:08.800892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.801250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.801278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.801294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.801528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.801733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.801761] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.801775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.804662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.313 [2024-07-25 14:26:08.813884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.814302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.814329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.814344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.814574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.814777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.814798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.814811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.313 [2024-07-25 14:26:08.817700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.313 [2024-07-25 14:26:08.827033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.313 [2024-07-25 14:26:08.827396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.313 [2024-07-25 14:26:08.827424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.313 [2024-07-25 14:26:08.827440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.313 [2024-07-25 14:26:08.827676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.313 [2024-07-25 14:26:08.827880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.313 [2024-07-25 14:26:08.827900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.313 [2024-07-25 14:26:08.827912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.830815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.314 [2024-07-25 14:26:08.840079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.840425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.840452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.840468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.840697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.840901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.840921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.840934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.843855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.314 [2024-07-25 14:26:08.853119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.853532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.853559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.853575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.853810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.854014] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.854034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.854047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.856924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.314 [2024-07-25 14:26:08.866258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.866605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.866632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.866648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.866884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.867115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.867138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.867151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.870001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.314 [2024-07-25 14:26:08.879296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.879641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.879667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.879682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.879898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.880129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.880151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.880164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.883015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.314 [2024-07-25 14:26:08.892365] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.892772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.892800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.892815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.893054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.893260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.893281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.893295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.896166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.314 [2024-07-25 14:26:08.905478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.905887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.905915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.905931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.906185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.906409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.906429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.906443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.909300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.314 [2024-07-25 14:26:08.918573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.918945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.918973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.918989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.919250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.919456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.919477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.919490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.922348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.314 [2024-07-25 14:26:08.931676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.932031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.932069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.932087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.932323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.932528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.932548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.932566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.935442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.314 [2024-07-25 14:26:08.944694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.314 [2024-07-25 14:26:08.945158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.314 [2024-07-25 14:26:08.945187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.314 [2024-07-25 14:26:08.945204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.314 [2024-07-25 14:26:08.945438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.314 [2024-07-25 14:26:08.945653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.314 [2024-07-25 14:26:08.945674] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.314 [2024-07-25 14:26:08.945702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.314 [2024-07-25 14:26:08.948724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.584 [2024-07-25 14:26:08.958222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:08.958644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:08.958673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:08.958689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:08.958940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:08.959199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:08.959221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:08.959235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:08.962150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.584 [2024-07-25 14:26:08.971637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:08.972039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:08.972108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:08.972125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:08.972387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:08.972577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:08.972596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:08.972608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:08.975552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.584 [2024-07-25 14:26:08.984681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:08.985136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:08.985168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:08.985184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:08.985431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:08.985627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:08.985648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:08.985661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:08.988562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.584 [2024-07-25 14:26:08.997836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:08.998215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:08.998246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:08.998263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:08.998493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:08.998718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:08.998740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:08.998753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.001981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.584 [2024-07-25 14:26:09.010934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.011282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.011311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.011328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.011591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.011785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.011806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.011820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.014656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.584 [2024-07-25 14:26:09.023944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.024381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.024425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.024441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.024676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.024885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.024905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.024918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.027876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.584 [2024-07-25 14:26:09.036960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.037382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.037411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.037428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.037664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.037870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.037891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.037904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.040807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.584 [2024-07-25 14:26:09.050069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.050418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.050447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.050463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.050700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.050904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.050925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.050938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.053859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.584 [2024-07-25 14:26:09.063167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.063518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.063557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.063573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.063808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.064023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.064057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.064082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.066982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.584 [2024-07-25 14:26:09.076281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.076654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.076683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.076698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.076917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.077132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.077152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.077164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.079956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.584 [2024-07-25 14:26:09.089545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.089892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.089920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.089936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.090170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.090393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.090415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.090443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.093307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.584 [2024-07-25 14:26:09.102549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.102858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.102884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.102900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.103130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.103339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.103360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.103374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.106246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.584 [2024-07-25 14:26:09.115679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.116025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.116053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.116110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.116364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.116569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.116589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.116602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.119474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.584 [2024-07-25 14:26:09.128725] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.129098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.129125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.129140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.129357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.129561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.129582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.129594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.132516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.584 [2024-07-25 14:26:09.141781] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.584 [2024-07-25 14:26:09.142098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.584 [2024-07-25 14:26:09.142126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.584 [2024-07-25 14:26:09.142141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.584 [2024-07-25 14:26:09.142357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.584 [2024-07-25 14:26:09.142562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.584 [2024-07-25 14:26:09.142583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.584 [2024-07-25 14:26:09.142595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.584 [2024-07-25 14:26:09.145498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.585 [2024-07-25 14:26:09.154927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.585 [2024-07-25 14:26:09.155283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.585 [2024-07-25 14:26:09.155311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.585 [2024-07-25 14:26:09.155327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.585 [2024-07-25 14:26:09.155562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.585 [2024-07-25 14:26:09.155766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.585 [2024-07-25 14:26:09.155790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.585 [2024-07-25 14:26:09.155804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.585 [2024-07-25 14:26:09.158724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.585 [2024-07-25 14:26:09.168018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.585 [2024-07-25 14:26:09.168386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.585 [2024-07-25 14:26:09.168411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.585 [2024-07-25 14:26:09.168426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.585 [2024-07-25 14:26:09.168621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.585 [2024-07-25 14:26:09.168841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.585 [2024-07-25 14:26:09.168861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.585 [2024-07-25 14:26:09.168874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.585 [2024-07-25 14:26:09.172123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.585 [2024-07-25 14:26:09.181054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.585 [2024-07-25 14:26:09.181413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.585 [2024-07-25 14:26:09.181441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.585 [2024-07-25 14:26:09.181458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.585 [2024-07-25 14:26:09.181698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.585 [2024-07-25 14:26:09.181902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.585 [2024-07-25 14:26:09.181922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.585 [2024-07-25 14:26:09.181935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.585 [2024-07-25 14:26:09.184825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.585 [2024-07-25 14:26:09.194047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.585 [2024-07-25 14:26:09.194400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.585 [2024-07-25 14:26:09.194427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.585 [2024-07-25 14:26:09.194443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.585 [2024-07-25 14:26:09.194678] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.585 [2024-07-25 14:26:09.194881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.585 [2024-07-25 14:26:09.194902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.585 [2024-07-25 14:26:09.194915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.585 [2024-07-25 14:26:09.197792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.585 [2024-07-25 14:26:09.207524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.585 [2024-07-25 14:26:09.207883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.585 [2024-07-25 14:26:09.207927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.585 [2024-07-25 14:26:09.207943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.585 [2024-07-25 14:26:09.208185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.585 [2024-07-25 14:26:09.208435] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.585 [2024-07-25 14:26:09.208456] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.585 [2024-07-25 14:26:09.208469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.585 [2024-07-25 14:26:09.211610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.585 [2024-07-25 14:26:09.221039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.585 [2024-07-25 14:26:09.221390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.585 [2024-07-25 14:26:09.221418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.585 [2024-07-25 14:26:09.221433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.585 [2024-07-25 14:26:09.221656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.585 [2024-07-25 14:26:09.221883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.585 [2024-07-25 14:26:09.221905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.585 [2024-07-25 14:26:09.221919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.585 [2024-07-25 14:26:09.225159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.585 [2024-07-25 14:26:09.234690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.845 [2024-07-25 14:26:09.235908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.845 [2024-07-25 14:26:09.235940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.845 [2024-07-25 14:26:09.235957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.845 [2024-07-25 14:26:09.236209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.845 [2024-07-25 14:26:09.236451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.845 [2024-07-25 14:26:09.236471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.845 [2024-07-25 14:26:09.236484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.845 [2024-07-25 14:26:09.239595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.845 [2024-07-25 14:26:09.247983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.845 [2024-07-25 14:26:09.248320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.845 [2024-07-25 14:26:09.248350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.845 [2024-07-25 14:26:09.248372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.845 [2024-07-25 14:26:09.248603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.845 [2024-07-25 14:26:09.248832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.845 [2024-07-25 14:26:09.248854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.845 [2024-07-25 14:26:09.248867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.846 [2024-07-25 14:26:09.252253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.846 [2024-07-25 14:26:09.261424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.846 [2024-07-25 14:26:09.261834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.846 [2024-07-25 14:26:09.261862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.846 [2024-07-25 14:26:09.261878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.846 [2024-07-25 14:26:09.262143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.846 [2024-07-25 14:26:09.262363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.846 [2024-07-25 14:26:09.262386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.846 [2024-07-25 14:26:09.262414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.846 [2024-07-25 14:26:09.265417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.846 [2024-07-25 14:26:09.274683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.846 [2024-07-25 14:26:09.275162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.846 [2024-07-25 14:26:09.275191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.846 [2024-07-25 14:26:09.275207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.846 [2024-07-25 14:26:09.275461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.846 [2024-07-25 14:26:09.275666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.846 [2024-07-25 14:26:09.275685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.846 [2024-07-25 14:26:09.275698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.846 [2024-07-25 14:26:09.278678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.846 [2024-07-25 14:26:09.287816] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.846 [2024-07-25 14:26:09.288218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.846 [2024-07-25 14:26:09.288247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.846 [2024-07-25 14:26:09.288263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.846 [2024-07-25 14:26:09.288505] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.846 [2024-07-25 14:26:09.288714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.846 [2024-07-25 14:26:09.288735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.846 [2024-07-25 14:26:09.288752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.846 [2024-07-25 14:26:09.291680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.846 [2024-07-25 14:26:09.300987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.846 [2024-07-25 14:26:09.301383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.846 [2024-07-25 14:26:09.301433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.846 [2024-07-25 14:26:09.301449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.846 [2024-07-25 14:26:09.301706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.846 [2024-07-25 14:26:09.301901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.846 [2024-07-25 14:26:09.301921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.846 [2024-07-25 14:26:09.301933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.846 [2024-07-25 14:26:09.304776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.846 [2024-07-25 14:26:09.314127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.846 [2024-07-25 14:26:09.314522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.846 [2024-07-25 14:26:09.314558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.846 [2024-07-25 14:26:09.314591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.846 [2024-07-25 14:26:09.314827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.846 [2024-07-25 14:26:09.315021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.846 [2024-07-25 14:26:09.315041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.846 [2024-07-25 14:26:09.315054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.846 [2024-07-25 14:26:09.317947] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.846 [2024-07-25 14:26:09.327811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.846 [2024-07-25 14:26:09.328154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.846 [2024-07-25 14:26:09.328183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.846 [2024-07-25 14:26:09.328199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.846 [2024-07-25 14:26:09.328431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.846 [2024-07-25 14:26:09.328661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.846 [2024-07-25 14:26:09.328682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.846 [2024-07-25 14:26:09.328695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.846 [2024-07-25 14:26:09.331986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.846 [2024-07-25 14:26:09.341227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.846 [2024-07-25 14:26:09.341576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.846 [2024-07-25 14:26:09.341627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.846 [2024-07-25 14:26:09.341661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.846 [2024-07-25 14:26:09.341878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.846 [2024-07-25 14:26:09.342113] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.846 [2024-07-25 14:26:09.342136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.846 [2024-07-25 14:26:09.342151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.846 [2024-07-25 14:26:09.345177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.846 [2024-07-25 14:26:09.354599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.846 [2024-07-25 14:26:09.354991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.846 [2024-07-25 14:26:09.355019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.846 [2024-07-25 14:26:09.355034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.846 [2024-07-25 14:26:09.355275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.355493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.355514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.355527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.358601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.847 [2024-07-25 14:26:09.368016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.368386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.368414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.368430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.368665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.368859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.368880] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.368893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.371952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.847 [2024-07-25 14:26:09.381362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.381789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.381838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.381854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.382123] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.382329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.382351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.382379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.385352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.847 [2024-07-25 14:26:09.394662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.395049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.395083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.395115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.395360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.395570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.395591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.395605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.398640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.847 [2024-07-25 14:26:09.407883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.408327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.408357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.408374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.408620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.408830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.408851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.408864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.411868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.847 [2024-07-25 14:26:09.421147] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.421527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.421556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.421572] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.421818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.422027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.422072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.422093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.425096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.847 [2024-07-25 14:26:09.434411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.434793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.434821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.434836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.435067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.435274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.435295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.435308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.438280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.847 [2024-07-25 14:26:09.447736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.448142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.448172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.448189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.448433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.448628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.448648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.448662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.451660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.847 [2024-07-25 14:26:09.460940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.461345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.461388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.461405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.461623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.847 [2024-07-25 14:26:09.461831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.847 [2024-07-25 14:26:09.461851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.847 [2024-07-25 14:26:09.461865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.847 [2024-07-25 14:26:09.464863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:39.847 [2024-07-25 14:26:09.474149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.847 [2024-07-25 14:26:09.474594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.847 [2024-07-25 14:26:09.474627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.847 [2024-07-25 14:26:09.474644] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.847 [2024-07-25 14:26:09.474890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.848 [2024-07-25 14:26:09.475142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.848 [2024-07-25 14:26:09.475165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.848 [2024-07-25 14:26:09.475179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.848 [2024-07-25 14:26:09.478158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.848 [2024-07-25 14:26:09.487424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.848 [2024-07-25 14:26:09.487748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.848 [2024-07-25 14:26:09.487775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:39.848 [2024-07-25 14:26:09.487791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:39.848 [2024-07-25 14:26:09.488008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:39.848 [2024-07-25 14:26:09.488238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.848 [2024-07-25 14:26:09.488260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.848 [2024-07-25 14:26:09.488274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.848 [2024-07-25 14:26:09.491248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.107 [2024-07-25 14:26:09.500887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.107 [2024-07-25 14:26:09.501272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.107 [2024-07-25 14:26:09.501301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.107 [2024-07-25 14:26:09.501317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.107 [2024-07-25 14:26:09.501533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.107 [2024-07-25 14:26:09.501778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.107 [2024-07-25 14:26:09.501815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.107 [2024-07-25 14:26:09.501830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.107 [2024-07-25 14:26:09.505147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.107 [2024-07-25 14:26:09.514092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.107 [2024-07-25 14:26:09.514496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.107 [2024-07-25 14:26:09.514525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.107 [2024-07-25 14:26:09.514541] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.107 [2024-07-25 14:26:09.514765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.107 [2024-07-25 14:26:09.514981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.107 [2024-07-25 14:26:09.515002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.107 [2024-07-25 14:26:09.515016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.107 [2024-07-25 14:26:09.517976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.107 [2024-07-25 14:26:09.527413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.107 [2024-07-25 14:26:09.527730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.107 [2024-07-25 14:26:09.527757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.107 [2024-07-25 14:26:09.527773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.107 [2024-07-25 14:26:09.527990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.107 [2024-07-25 14:26:09.528230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.107 [2024-07-25 14:26:09.528251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.107 [2024-07-25 14:26:09.528265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.107 [2024-07-25 14:26:09.531232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.107 [2024-07-25 14:26:09.540717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.107 [2024-07-25 14:26:09.541022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.107 [2024-07-25 14:26:09.541049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.107 [2024-07-25 14:26:09.541072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.107 [2024-07-25 14:26:09.541293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.107 [2024-07-25 14:26:09.541504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.107 [2024-07-25 14:26:09.541524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.107 [2024-07-25 14:26:09.541536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.107 [2024-07-25 14:26:09.544540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.107 [2024-07-25 14:26:09.554029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.107 [2024-07-25 14:26:09.554409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.107 [2024-07-25 14:26:09.554438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.107 [2024-07-25 14:26:09.554454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.107 [2024-07-25 14:26:09.554697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.107 [2024-07-25 14:26:09.554892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.107 [2024-07-25 14:26:09.554912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.107 [2024-07-25 14:26:09.554926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.557889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.108 [2024-07-25 14:26:09.567441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.567800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.567828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.567845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.568104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.568303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.568324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.568338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.571362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.108 [2024-07-25 14:26:09.580633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.581051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.581086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.581103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.581347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.581558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.581579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.581592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.584605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.108 [2024-07-25 14:26:09.593939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.594335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.594364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.594396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.594629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.594824] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.594845] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.594858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.597921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.108 [2024-07-25 14:26:09.607247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.607687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.607716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.607737] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.607983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.608229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.608252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.608267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.611238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.108 [2024-07-25 14:26:09.620512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.620833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.620860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.620876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.621126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.621335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.621356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.621370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.624344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.108 [2024-07-25 14:26:09.633834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.634187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.634216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.634231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.634473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.634683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.634703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.634716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.637721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.108 [2024-07-25 14:26:09.647199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.647637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.647664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.647680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.647917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.648154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.648179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.648194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.651191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.108 [2024-07-25 14:26:09.660485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.660907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.660937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.660954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.661206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.661420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.661442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.661455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.664381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.108 [2024-07-25 14:26:09.673693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.674109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.674138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.674155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.674398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.674592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.674613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.674626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.677706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.108 [2024-07-25 14:26:09.687088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.687446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.687475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.687491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.687737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.687939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.687961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.687975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.690970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.108 [2024-07-25 14:26:09.700304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.700744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.700773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.700790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.701033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.701281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.701305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.701320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.704319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.108 [2024-07-25 14:26:09.713601] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.713952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.713981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.713997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.714265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.714480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.714501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.714514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.717473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.108 [2024-07-25 14:26:09.726896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.727291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.727319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.727335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.727558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.727768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.727789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.727802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.730785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.108 [2024-07-25 14:26:09.740202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.740577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.740605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.740621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.740868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.741121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.741145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.741161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.744148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.108 [2024-07-25 14:26:09.753431] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.108 [2024-07-25 14:26:09.753785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.108 [2024-07-25 14:26:09.753815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.108 [2024-07-25 14:26:09.753832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.108 [2024-07-25 14:26:09.754087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.108 [2024-07-25 14:26:09.754320] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.108 [2024-07-25 14:26:09.754344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.108 [2024-07-25 14:26:09.754374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.108 [2024-07-25 14:26:09.757641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.369 [2024-07-25 14:26:09.766757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.369 [2024-07-25 14:26:09.767083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.369 [2024-07-25 14:26:09.767112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.369 [2024-07-25 14:26:09.767144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.369 [2024-07-25 14:26:09.767390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.369 [2024-07-25 14:26:09.767602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.369 [2024-07-25 14:26:09.767624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.369 [2024-07-25 14:26:09.767637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.369 [2024-07-25 14:26:09.770607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.369 [2024-07-25 14:26:09.780090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.369 [2024-07-25 14:26:09.780418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.369 [2024-07-25 14:26:09.780444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.369 [2024-07-25 14:26:09.780459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.369 [2024-07-25 14:26:09.780677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.369 [2024-07-25 14:26:09.780886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.369 [2024-07-25 14:26:09.780907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.369 [2024-07-25 14:26:09.780925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.369 [2024-07-25 14:26:09.783932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.369 [2024-07-25 14:26:09.793424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.369 [2024-07-25 14:26:09.793746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.369 [2024-07-25 14:26:09.793773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.369 [2024-07-25 14:26:09.793789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.369 [2024-07-25 14:26:09.794005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.369 [2024-07-25 14:26:09.794237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.369 [2024-07-25 14:26:09.794260] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.369 [2024-07-25 14:26:09.794275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.369 [2024-07-25 14:26:09.797248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.369 [2024-07-25 14:26:09.806685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.369 [2024-07-25 14:26:09.807101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.369 [2024-07-25 14:26:09.807130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.369 [2024-07-25 14:26:09.807147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.369 [2024-07-25 14:26:09.807391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.369 [2024-07-25 14:26:09.807600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.369 [2024-07-25 14:26:09.807620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.369 [2024-07-25 14:26:09.807633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.369 [2024-07-25 14:26:09.810630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.369 [2024-07-25 14:26:09.819890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.369 [2024-07-25 14:26:09.820270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.369 [2024-07-25 14:26:09.820300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.369 [2024-07-25 14:26:09.820316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.369 [2024-07-25 14:26:09.820574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.369 [2024-07-25 14:26:09.820784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.369 [2024-07-25 14:26:09.820805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.369 [2024-07-25 14:26:09.820817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.369 [2024-07-25 14:26:09.823809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.369 [2024-07-25 14:26:09.833082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.369 [2024-07-25 14:26:09.833448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.369 [2024-07-25 14:26:09.833476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.833491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.833728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.833922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.833943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.833956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.836955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.370 [2024-07-25 14:26:09.846402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.846821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.846850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.846867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.847124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.847331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.847354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.847368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.850325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.370 [2024-07-25 14:26:09.859675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.860050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.860087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.860104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.860347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.860558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.860579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.860591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.863585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.370 [2024-07-25 14:26:09.872988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.873377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.873406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.873423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.873676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.873886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.873907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.873920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.876917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.370 [2024-07-25 14:26:09.886196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.886536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.886563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.886579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.886803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.887013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.887048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.887071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.890079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.370 [2024-07-25 14:26:09.899531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.899888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.899915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.899931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.900183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.900410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.900446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.900460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.903415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.370 [2024-07-25 14:26:09.912831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.913216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.913244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.913260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.913501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.913710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.913731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.913749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.916757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.370 [2024-07-25 14:26:09.926020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.926353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.926382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.926399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.926624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.926835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.926856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.926869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.929850] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.370 [2024-07-25 14:26:09.939273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.939707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.939736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.939751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.939993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.370 [2024-07-25 14:26:09.940234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.370 [2024-07-25 14:26:09.940258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.370 [2024-07-25 14:26:09.940272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.370 [2024-07-25 14:26:09.943253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.370 [2024-07-25 14:26:09.952564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.370 [2024-07-25 14:26:09.952982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.370 [2024-07-25 14:26:09.953011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.370 [2024-07-25 14:26:09.953027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.370 [2024-07-25 14:26:09.953266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.371 [2024-07-25 14:26:09.953512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.371 [2024-07-25 14:26:09.953534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.371 [2024-07-25 14:26:09.953546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.371 [2024-07-25 14:26:09.956460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.371 [2024-07-25 14:26:09.965926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.371 [2024-07-25 14:26:09.966317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.371 [2024-07-25 14:26:09.966350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.371 [2024-07-25 14:26:09.966367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.371 [2024-07-25 14:26:09.966607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.371 [2024-07-25 14:26:09.966816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.371 [2024-07-25 14:26:09.966837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.371 [2024-07-25 14:26:09.966851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.371 [2024-07-25 14:26:09.969818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.371 [2024-07-25 14:26:09.979277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.371 [2024-07-25 14:26:09.979711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.371 [2024-07-25 14:26:09.979741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.371 [2024-07-25 14:26:09.979757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.371 [2024-07-25 14:26:09.980001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.371 [2024-07-25 14:26:09.980243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.371 [2024-07-25 14:26:09.980266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.371 [2024-07-25 14:26:09.980280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.371 [2024-07-25 14:26:09.983240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.371 [2024-07-25 14:26:09.992519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.371 [2024-07-25 14:26:09.992938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.371 [2024-07-25 14:26:09.992967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.371 [2024-07-25 14:26:09.992983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.371 [2024-07-25 14:26:09.993235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.371 [2024-07-25 14:26:09.993467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.371 [2024-07-25 14:26:09.993488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.371 [2024-07-25 14:26:09.993501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.371 [2024-07-25 14:26:09.996477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.371 [2024-07-25 14:26:10.006580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.371 [2024-07-25 14:26:10.006950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.371 [2024-07-25 14:26:10.006983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.371 [2024-07-25 14:26:10.007001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.371 [2024-07-25 14:26:10.007230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.371 [2024-07-25 14:26:10.007471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.371 [2024-07-25 14:26:10.007495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.371 [2024-07-25 14:26:10.007510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.371 [2024-07-25 14:26:10.010748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.371 [2024-07-25 14:26:10.020300] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.631 [2024-07-25 14:26:10.020758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.631 [2024-07-25 14:26:10.020790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.631 [2024-07-25 14:26:10.020807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.631 [2024-07-25 14:26:10.021040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.631 [2024-07-25 14:26:10.021282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.631 [2024-07-25 14:26:10.021307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.631 [2024-07-25 14:26:10.021323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.631 [2024-07-25 14:26:10.024448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.631 [2024-07-25 14:26:10.033629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.631 [2024-07-25 14:26:10.034048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.631 [2024-07-25 14:26:10.034085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.631 [2024-07-25 14:26:10.034103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.631 [2024-07-25 14:26:10.034347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.631 [2024-07-25 14:26:10.034558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.631 [2024-07-25 14:26:10.034580] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.631 [2024-07-25 14:26:10.034593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.631 [2024-07-25 14:26:10.037590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.631 [2024-07-25 14:26:10.047073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.631 [2024-07-25 14:26:10.047410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.631 [2024-07-25 14:26:10.047439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.631 [2024-07-25 14:26:10.047455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.631 [2024-07-25 14:26:10.047679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.631 [2024-07-25 14:26:10.047897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.631 [2024-07-25 14:26:10.047919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.631 [2024-07-25 14:26:10.047932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.631 [2024-07-25 14:26:10.050949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.631 [2024-07-25 14:26:10.060362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.631 [2024-07-25 14:26:10.060698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.631 [2024-07-25 14:26:10.060726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.631 [2024-07-25 14:26:10.060742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.631 [2024-07-25 14:26:10.060966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.631 [2024-07-25 14:26:10.061202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.631 [2024-07-25 14:26:10.061224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.631 [2024-07-25 14:26:10.061237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.064341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.632 [2024-07-25 14:26:10.073665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.074035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.074083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.074101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.074330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.074556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.074577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.074590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.077587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.632 [2024-07-25 14:26:10.086884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.087325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.087353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.087370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.087611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.087822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.087842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.087855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.090912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.632 [2024-07-25 14:26:10.100190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.100585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.100628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.100649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.100884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.101136] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.101158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.101172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.104149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.632 [2024-07-25 14:26:10.113460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.113815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.113842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.113857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.114105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.114318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.114355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.114369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.117347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.632 [2024-07-25 14:26:10.126774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.127188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.127217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.127233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.127476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.127685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.127706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.127720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.130724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.632 [2024-07-25 14:26:10.140003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.140385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.140414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.140430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.140675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.140890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.140916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.140930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.143900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.632 [2024-07-25 14:26:10.153189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.153564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.153591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.153607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.153841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.154050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.154080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.154095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.157091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.632 [2024-07-25 14:26:10.166401] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.166819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.166848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.166864] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.167117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.167337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.167359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.167373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.170345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.632 [2024-07-25 14:26:10.179657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.179999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.632 [2024-07-25 14:26:10.180028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.632 [2024-07-25 14:26:10.180044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.632 [2024-07-25 14:26:10.180299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.632 [2024-07-25 14:26:10.180530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.632 [2024-07-25 14:26:10.180551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.632 [2024-07-25 14:26:10.180563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.632 [2024-07-25 14:26:10.183520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.632 [2024-07-25 14:26:10.192953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.632 [2024-07-25 14:26:10.193307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.633 [2024-07-25 14:26:10.193336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.633 [2024-07-25 14:26:10.193367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.633 [2024-07-25 14:26:10.193604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.633 [2024-07-25 14:26:10.193799] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.633 [2024-07-25 14:26:10.193819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.633 [2024-07-25 14:26:10.193832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.633 [2024-07-25 14:26:10.196818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.633 [2024-07-25 14:26:10.206273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.633 [2024-07-25 14:26:10.206708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.633 [2024-07-25 14:26:10.206736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.633 [2024-07-25 14:26:10.206751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.633 [2024-07-25 14:26:10.206988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.633 [2024-07-25 14:26:10.207230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.633 [2024-07-25 14:26:10.207253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.633 [2024-07-25 14:26:10.207267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.633 [2024-07-25 14:26:10.210242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.633 [2024-07-25 14:26:10.219529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.633 [2024-07-25 14:26:10.219885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.633 [2024-07-25 14:26:10.219913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.633 [2024-07-25 14:26:10.219928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.633 [2024-07-25 14:26:10.220178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.633 [2024-07-25 14:26:10.220397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.633 [2024-07-25 14:26:10.220418] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.633 [2024-07-25 14:26:10.220431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.633 [2024-07-25 14:26:10.223382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.633 [2024-07-25 14:26:10.232809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.633 [2024-07-25 14:26:10.233165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.633 [2024-07-25 14:26:10.233195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.633 [2024-07-25 14:26:10.233212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.633 [2024-07-25 14:26:10.233461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.633 [2024-07-25 14:26:10.233656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.633 [2024-07-25 14:26:10.233677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.633 [2024-07-25 14:26:10.233690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.633 [2024-07-25 14:26:10.236694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.633 [2024-07-25 14:26:10.246130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.633 [2024-07-25 14:26:10.246450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.633 [2024-07-25 14:26:10.246477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.633 [2024-07-25 14:26:10.246492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.633 [2024-07-25 14:26:10.246711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.633 [2024-07-25 14:26:10.246923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.633 [2024-07-25 14:26:10.246945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.633 [2024-07-25 14:26:10.246958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.633 [2024-07-25 14:26:10.249922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.633 [2024-07-25 14:26:10.259425] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.633 [2024-07-25 14:26:10.259756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.633 [2024-07-25 14:26:10.259785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.633 [2024-07-25 14:26:10.259801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.633 [2024-07-25 14:26:10.260026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.633 [2024-07-25 14:26:10.260272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.633 [2024-07-25 14:26:10.260294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.633 [2024-07-25 14:26:10.260307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.633 [2024-07-25 14:26:10.263604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.633 [2024-07-25 14:26:10.272763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.633 [2024-07-25 14:26:10.273128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.633 [2024-07-25 14:26:10.273157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.633 [2024-07-25 14:26:10.273174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.633 [2024-07-25 14:26:10.273403] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.633 [2024-07-25 14:26:10.273616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.633 [2024-07-25 14:26:10.273637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.633 [2024-07-25 14:26:10.273655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.633 [2024-07-25 14:26:10.276672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.893 [2024-07-25 14:26:10.286207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.286707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.286761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.286776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.287023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.287257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.287279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.287293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.290423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.893 [2024-07-25 14:26:10.299465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.299867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.299921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.299937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.300190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.300391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.300412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.300426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.303361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.893 [2024-07-25 14:26:10.312583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.312930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.312958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.312973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.313241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.313466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.313487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.313499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.316420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.893 [2024-07-25 14:26:10.325709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.326069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.326097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.326112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.326347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.326551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.326571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.326584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.329382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.893 [2024-07-25 14:26:10.338805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.339221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.339249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.339266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.339502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.339705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.339725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.339738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.342649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.893 [2024-07-25 14:26:10.352083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.352509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.352536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.352552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.352787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.352991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.353010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.353022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.355970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.893 [2024-07-25 14:26:10.365276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.365693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.365731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.365762] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.366003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.366245] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.366266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.366278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.369218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.893 [2024-07-25 14:26:10.378548] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.378886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.378922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.378955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.379196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.379432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.379452] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.379465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.382752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.893 [2024-07-25 14:26:10.391687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.392127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.392154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.392170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.392412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.392606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.392625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.392637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.395538] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.893 [2024-07-25 14:26:10.404852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.405306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.405333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.405349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.405592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.405802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.405820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.405832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.408700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.893 [2024-07-25 14:26:10.418129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.418483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.418524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.418539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.418786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.418979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.418997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.419009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.421950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.893 [2024-07-25 14:26:10.431317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.431695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.431735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.431750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.431973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.432190] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.432210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.432223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.435143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.893 [2024-07-25 14:26:10.444559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.444895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.444923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.444938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.445170] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.445379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.445398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.445410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.448317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.893 [2024-07-25 14:26:10.457735] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.458074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.458105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.458121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.458329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.458558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.458577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.458590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 [2024-07-25 14:26:10.461496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1016189 Killed "${NVMF_APP[@]}" "$@" 00:24:40.893 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:40.893 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:40.893 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:40.893 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:40.893 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:40.893 [2024-07-25 14:26:10.471406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.893 [2024-07-25 14:26:10.471759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.893 [2024-07-25 14:26:10.471790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.893 [2024-07-25 14:26:10.471807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.893 [2024-07-25 14:26:10.472068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.893 [2024-07-25 14:26:10.472282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.893 [2024-07-25 14:26:10.472302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.893 [2024-07-25 14:26:10.472315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.893 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1017240 00:24:40.894 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:40.894 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1017240 00:24:40.894 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1017240 ']' 00:24:40.894 [2024-07-25 14:26:10.475300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.894 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.894 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.894 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:40.894 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.894 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:40.894 [2024-07-25 14:26:10.485078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.894 [2024-07-25 14:26:10.485533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.894 [2024-07-25 14:26:10.485580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.894 [2024-07-25 14:26:10.485597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.894 [2024-07-25 14:26:10.485836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.894 [2024-07-25 14:26:10.486070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.894 [2024-07-25 14:26:10.486092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.894 [2024-07-25 14:26:10.486106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.894 [2024-07-25 14:26:10.489326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.894 [2024-07-25 14:26:10.498414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.894 [2024-07-25 14:26:10.498787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.894 [2024-07-25 14:26:10.498815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.894 [2024-07-25 14:26:10.498831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.894 [2024-07-25 14:26:10.499086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.894 [2024-07-25 14:26:10.499321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.894 [2024-07-25 14:26:10.499342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.894 [2024-07-25 14:26:10.499356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.894 [2024-07-25 14:26:10.502416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.894 [2024-07-25 14:26:10.511743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.894 [2024-07-25 14:26:10.512134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.894 [2024-07-25 14:26:10.512162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.894 [2024-07-25 14:26:10.512178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.894 [2024-07-25 14:26:10.512408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.894 [2024-07-25 14:26:10.512624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.894 [2024-07-25 14:26:10.512643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.894 [2024-07-25 14:26:10.512655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.894 [2024-07-25 14:26:10.515983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:40.894 [2024-07-25 14:26:10.524116] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:24:40.894 [2024-07-25 14:26:10.524190] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.894 [2024-07-25 14:26:10.525066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.894 [2024-07-25 14:26:10.525411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.894 [2024-07-25 14:26:10.525437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.894 [2024-07-25 14:26:10.525460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.894 [2024-07-25 14:26:10.525662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.894 [2024-07-25 14:26:10.525890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.894 [2024-07-25 14:26:10.525909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.894 [2024-07-25 14:26:10.525922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.894 [2024-07-25 14:26:10.528874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.894 [2024-07-25 14:26:10.538215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.894 [2024-07-25 14:26:10.538586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.894 [2024-07-25 14:26:10.538627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:40.894 [2024-07-25 14:26:10.538642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:40.894 [2024-07-25 14:26:10.538889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:40.894 [2024-07-25 14:26:10.539285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.894 [2024-07-25 14:26:10.539304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.894 [2024-07-25 14:26:10.539331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.894 [2024-07-25 14:26:10.542492] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.154 [2024-07-25 14:26:10.551562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.551958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.551984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.552000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.552266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.552492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.552511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.552523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.555458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.154 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.154 [2024-07-25 14:26:10.564837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.565250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.565278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.565293] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.565521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.565739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.565758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.565771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.568761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.154 [2024-07-25 14:26:10.578096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.578520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.578548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.578564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.578811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.579026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.579065] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.579081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.582099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.154 [2024-07-25 14:26:10.590148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:41.154 [2024-07-25 14:26:10.591496] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.591889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.591931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.591948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.592188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.592430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.592449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.592462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.595482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.154 [2024-07-25 14:26:10.604867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.605396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.605435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.605455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.605706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.605926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.605946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.605962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.608954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.154 [2024-07-25 14:26:10.618134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.618475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.618502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.618517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.618741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.618940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.618959] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.618972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.621940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.154 [2024-07-25 14:26:10.631411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.631857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.631885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.631901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.632159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.632401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.632422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.632436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.635335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.154 [2024-07-25 14:26:10.644677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.645070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.645099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.645116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.645362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.645562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.645581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.645594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.648617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.154 [2024-07-25 14:26:10.658139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.658643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.658691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.658711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.154 [2024-07-25 14:26:10.658953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.154 [2024-07-25 14:26:10.659186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.154 [2024-07-25 14:26:10.659208] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.154 [2024-07-25 14:26:10.659223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.154 [2024-07-25 14:26:10.662404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.154 [2024-07-25 14:26:10.671443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.154 [2024-07-25 14:26:10.671779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.154 [2024-07-25 14:26:10.671806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.154 [2024-07-25 14:26:10.671821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.672055] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.672280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.672300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.672313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.675311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-25 14:26:10.684810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.685174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.685202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.685218] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.685449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.685674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.685693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.685707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.688713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-25 14:26:10.696792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.155 [2024-07-25 14:26:10.696841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.155 [2024-07-25 14:26:10.696854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.155 [2024-07-25 14:26:10.696865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.155 [2024-07-25 14:26:10.696874] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
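The app_setup_trace notices just above describe the two ways the trace data from this run can be collected. Collected into a short sketch (the 'spdk_trace -s nvmf -i 0' invocation and the /dev/shm/nvmf_trace.0 path are taken verbatim from those notices; the output filenames below are arbitrary):

    # Option 1: live snapshot of tracepoint events from the running nvmf app.
    spdk_trace -s nvmf -i 0 > nvmf_trace_snapshot.txt
    # Option 2: copy the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0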
00:24:41.155 [2024-07-25 14:26:10.697159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.155 [2024-07-25 14:26:10.697186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:41.155 [2024-07-25 14:26:10.697189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.155 [2024-07-25 14:26:10.698237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.698604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.698632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.698647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.698863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.699128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.699152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.699166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.702343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-25 14:26:10.711989] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.712604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.712645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.712665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.712915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.713162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.713185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.713204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.716465] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.155 [2024-07-25 14:26:10.725533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.726087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.726138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.726159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.726409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.726628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.726649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.726667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.729896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-25 14:26:10.739180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.739677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.739729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.739750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.739991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.740249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.740272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.740289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.743530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.155 [2024-07-25 14:26:10.752778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.753288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.753325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.753353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.753592] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.753808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.753829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.753846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.757015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-25 14:26:10.766306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.766864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.766905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.766924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.767159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.767384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.767405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.767423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.770695] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.155 [2024-07-25 14:26:10.779856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.780302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.780338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.780367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.780605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.780832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.780853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.780870] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.784098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.155 [2024-07-25 14:26:10.793519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.155 [2024-07-25 14:26:10.793875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.155 [2024-07-25 14:26:10.793903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.155 [2024-07-25 14:26:10.793919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.155 [2024-07-25 14:26:10.794144] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.155 [2024-07-25 14:26:10.794377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.155 [2024-07-25 14:26:10.794398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.155 [2024-07-25 14:26:10.794412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.155 [2024-07-25 14:26:10.797607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.415 [2024-07-25 14:26:10.807123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-25 14:26:10.807459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.416 [2024-07-25 14:26:10.807486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.416 [2024-07-25 14:26:10.807502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.416 [2024-07-25 14:26:10.807717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.416 [2024-07-25 14:26:10.807935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.416 [2024-07-25 14:26:10.807956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.416 [2024-07-25 14:26:10.807970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.416 [2024-07-25 14:26:10.811276] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 [2024-07-25 14:26:10.820678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-25 14:26:10.821068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.416 [2024-07-25 14:26:10.821096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.416 [2024-07-25 14:26:10.821119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.416 [2024-07-25 14:26:10.821334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.416 [2024-07-25 14:26:10.821574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.416 [2024-07-25 14:26:10.821594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.416 [2024-07-25 14:26:10.821607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.416 [2024-07-25 14:26:10.824861] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.416 [2024-07-25 14:26:10.834247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-25 14:26:10.834615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.416 [2024-07-25 14:26:10.834643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.416 [2024-07-25 14:26:10.834658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.416 [2024-07-25 14:26:10.834873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.416 [2024-07-25 14:26:10.835132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.416 [2024-07-25 14:26:10.835154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.416 [2024-07-25 14:26:10.835168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.416 [2024-07-25 14:26:10.838398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 [2024-07-25 14:26:10.841460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.416 [2024-07-25 14:26:10.847650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-25 14:26:10.848066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.416 [2024-07-25 14:26:10.848095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.416 [2024-07-25 14:26:10.848110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.416 [2024-07-25 14:26:10.848325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.416 [2024-07-25 14:26:10.848563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.416 [2024-07-25 14:26:10.848583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.416 [2024-07-25 14:26:10.848596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.416 [2024-07-25 14:26:10.851758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.416 [2024-07-25 14:26:10.861166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-25 14:26:10.861626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.416 [2024-07-25 14:26:10.861653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.416 [2024-07-25 14:26:10.861669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.416 [2024-07-25 14:26:10.861899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.416 [2024-07-25 14:26:10.862169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.416 [2024-07-25 14:26:10.862191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.416 [2024-07-25 14:26:10.862205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.416 [2024-07-25 14:26:10.865395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 [2024-07-25 14:26:10.874718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-25 14:26:10.875144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.416 [2024-07-25 14:26:10.875178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.416 [2024-07-25 14:26:10.875195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.416 [2024-07-25 14:26:10.875428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.416 [2024-07-25 14:26:10.875642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.416 [2024-07-25 14:26:10.875663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.416 [2024-07-25 14:26:10.875678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.416 [2024-07-25 14:26:10.878944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.416 [2024-07-25 14:26:10.888373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-25 14:26:10.888928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.416 [2024-07-25 14:26:10.888969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.416 [2024-07-25 14:26:10.888991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.416 [2024-07-25 14:26:10.889228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.416 [2024-07-25 14:26:10.889466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.416 [2024-07-25 14:26:10.889488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.416 [2024-07-25 14:26:10.889506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.416 Malloc0 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 [2024-07-25 14:26:10.893092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.416 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.416 [2024-07-25 14:26:10.902135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.416 [2024-07-25 14:26:10.902574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.417 [2024-07-25 14:26:10.902604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1662ac0 with addr=10.0.0.2, port=4420 00:24:41.417 [2024-07-25 14:26:10.902621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1662ac0 is same with the state(5) to be set 00:24:41.417 [2024-07-25 14:26:10.902837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1662ac0 (9): Bad file descriptor 00:24:41.417 [2024-07-25 14:26:10.903091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.417 [2024-07-25 14:26:10.903113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.417 [2024-07-25 14:26:10.903127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
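The rpc_cmd calls traced through this section, together with the nvmf_subsystem_add_listener call that follows immediately below, are the target bring-up that host/bdevperf.sh performs before the reconnects can succeed. As a sketch only, the same sequence issued directly with SPDK's scripts/rpc.py would look roughly like this, using the arguments exactly as traced in the log (this is not a verbatim excerpt of the test script):

    # Target-side setup driven by the bdevperf test, per the rpc_cmd trace above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB malloc bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is added, the log below switches from "Resetting controller failed" to "Resetting controller successful" and bdevperf's verification workload runs to completion.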
00:24:41.417 [2024-07-25 14:26:10.906428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:41.417 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.417 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.417 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.417 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:41.417 [2024-07-25 14:26:10.911564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.417 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.417 [2024-07-25 14:26:10.915752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.417 14:26:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1016478 00:24:41.417 [2024-07-25 14:26:10.960700] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:51.384 00:24:51.384 Latency(us) 00:24:51.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.384 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:51.384 Verification LBA range: start 0x0 length 0x4000 00:24:51.384 Nvme1n1 : 15.04 6741.50 26.33 10175.54 0.00 7524.06 564.34 42331.40 00:24:51.384 =================================================================================================================== 00:24:51.384 Total : 6741.50 26.33 10175.54 0.00 7524.06 564.34 42331.40 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:51.384 rmmod nvme_tcp 00:24:51.384 rmmod nvme_fabrics 00:24:51.384 rmmod nvme_keyring 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@489 -- # '[' -n 1017240 ']' 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1017240 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1017240 ']' 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1017240 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1017240 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1017240' 00:24:51.384 killing process with pid 1017240 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1017240 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1017240 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.384 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.385 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.385 14:26:20 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:53.292 00:24:53.292 real 0m22.490s 00:24:53.292 user 1m0.839s 00:24:53.292 sys 0m4.118s 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:53.292 ************************************ 00:24:53.292 END TEST nvmf_bdevperf 00:24:53.292 ************************************ 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.292 ************************************ 00:24:53.292 START TEST nvmf_target_disconnect 00:24:53.292 ************************************ 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:53.292 * Looking for test storage... 00:24:53.292 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.292 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.293 14:26:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.293 14:26:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:55.205 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:55.206 14:26:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:55.206 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:55.206 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:55.206 14:26:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:55.206 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:55.206 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:55.206 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:55.467 14:26:24 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:55.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:24:55.467 00:24:55.467 --- 10.0.0.2 ping statistics --- 00:24:55.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.467 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:24:55.467 00:24:55.467 --- 10.0.0.1 ping statistics --- 00:24:55.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.467 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:55.467 14:26:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:55.467 ************************************ 00:24:55.467 START TEST nvmf_target_disconnect_tc1 00:24:55.467 ************************************ 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:55.467 14:26:25 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:55.467 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:55.467 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.726 [2024-07-25 14:26:25.126819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:55.726 [2024-07-25 14:26:25.126906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e61a0 with addr=10.0.0.2, port=4420 00:24:55.726 [2024-07-25 14:26:25.126944] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:55.726 [2024-07-25 14:26:25.126964] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:55.726 [2024-07-25 14:26:25.126977] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:55.726 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:55.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:55.726 Initializing NVMe Controllers 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:55.726 00:24:55.726 real 0m0.099s 00:24:55.726 user 0m0.042s 00:24:55.726 sys 0m0.057s 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:55.726 ************************************ 00:24:55.726 END TEST nvmf_target_disconnect_tc1 00:24:55.726 ************************************ 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:55.726 ************************************ 00:24:55.726 START TEST nvmf_target_disconnect_tc2 00:24:55.726 ************************************ 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:55.726 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1020296 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1020296 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1020296 ']' 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:55.727 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.727 [2024-07-25 14:26:25.238819] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:24:55.727 [2024-07-25 14:26:25.238899] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.727 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.727 [2024-07-25 14:26:25.308096] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.985 [2024-07-25 14:26:25.419223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.985 [2024-07-25 14:26:25.419281] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.985 [2024-07-25 14:26:25.419309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.985 [2024-07-25 14:26:25.419320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.985 [2024-07-25 14:26:25.419330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.985 [2024-07-25 14:26:25.419684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:55.985 [2024-07-25 14:26:25.419785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:55.985 [2024-07-25 14:26:25.419883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:55.985 [2024-07-25 14:26:25.419926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.985 Malloc0 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.985 [2024-07-25 14:26:25.587172] 
tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.985 [2024-07-25 14:26:25.615433] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.985 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:55.986 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.986 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:55.986 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.986 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1020442 00:24:55.986 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:55.986 14:26:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:56.245 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.163 14:26:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1020296 00:24:58.163 14:26:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Write completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Read completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Write completed with error (sct=0, sc=8) 00:24:58.163 starting I/O failed 00:24:58.163 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 [2024-07-25 14:26:27.641050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read 
completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 [2024-07-25 14:26:27.641384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error 
(sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Read completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 Write completed with error (sct=0, sc=8) 00:24:58.164 starting I/O failed 00:24:58.164 [2024-07-25 14:26:27.641672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:58.164 [2024-07-25 14:26:27.641868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-25 14:26:27.641900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-25 14:26:27.642016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-25 14:26:27.642043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-25 14:26:27.642176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-25 14:26:27.642208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.164 [2024-07-25 14:26:27.642309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-25 14:26:27.642334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 
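The completion dumps above are the reconnect example draining its queued I/O after the target process (nvmfpid 1020296) was killed with SIGKILL: every outstanding command on qpairs 4, 3 and 2 is completed back with an error status (sct=0, sc=8), the driver then reports a CQ transport error of -6 per qpair, and the admin qpair starts cycling through reconnect attempts. The repeated posix_sock_create entries that follow are those attempts being rejected, since nothing is listening on 10.0.0.2:4420 any more; errno 111 is ECONNREFUSED on Linux. A quick manual spot check of the same condition in this topology could look like the sketch below; the namespace and address come from the nvmf_tcp_init output earlier in the trace, while the exact commands are an illustrative assumption rather than part of the test:

  # Hypothetical spot check, assuming the cvl_0_0_ns_spdk namespace and the
  # 10.0.0.2:4420 listener created by nvmf_tcp_init / the tc2 setup above.
  ip netns exec cvl_0_0_ns_spdk ss -ltn 'sport = :4420'   # no listener once nvmf_tgt is gone
  nc -z -w 1 10.0.0.2 4420; echo "connect rc=$?"          # non-zero rc: connection refused (errno 111)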
00:24:58.164 [2024-07-25 14:26:27.642450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.164 [2024-07-25 14:26:27.642475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.164 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.642577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.642603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.642707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.642732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.642825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.642850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.642943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.642969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.643086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.643130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.643232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.643259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.643362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.643401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.643502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.643529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.643644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.643669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-25 14:26:27.643750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.643776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.643861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.643885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.643971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.643996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.644122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.644149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.644241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.644266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.644400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.644425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.644539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.644564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.644680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.644705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.644825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.644849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.644937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.644962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-25 14:26:27.645046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.645076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.645198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.645223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.645314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.645339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.645425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.645450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.645566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.645591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.645717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.645744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.645855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.645880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.645998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.646023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.646120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.646145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.646246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.646271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 
00:24:58.165 [2024-07-25 14:26:27.646364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.646390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.646531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.646556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.646675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.165 [2024-07-25 14:26:27.646700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.165 qpair failed and we were unable to recover it. 00:24:58.165 [2024-07-25 14:26:27.646793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.646818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.646904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.646928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.647047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.647079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.647159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.647184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.647274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.647299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.647387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.647412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.647528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.647553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-25 14:26:27.647696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.647721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.647801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.647825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.647940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.647964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.648041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.648074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.648173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.648198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.648284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.648308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.648428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.648453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.648537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.648563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.648667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.648706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.648830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.648858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 [2024-07-25 14:26:27.648955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.648981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.649103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.649129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.649217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.649246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.649359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.649385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.649504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.649529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 00:24:58.166 [2024-07-25 14:26:27.649667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.166 [2024-07-25 14:26:27.649691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.166 qpair failed and we were unable to recover it. 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Write completed with error (sct=0, sc=8) 
00:24:58.166 starting I/O failed 
00:24:58.166 Read completed with error (sct=0, sc=8) 
00:24:58.167 starting I/O failed 
00:24:58.167 [2024-07-25 14:26:27.650000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 
00:24:58.167 [2024-07-25 14:26:27.650093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:58.167 [2024-07-25 14:26:27.650119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 
00:24:58.167 qpair failed and we were unable to recover it. 
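[editor's note] For context on the burst above: errno 111 on Linux is ECONNREFUSED, i.e. the connect() attempts to 10.0.0.2 port 4420 were actively refused (typically because no NVMe-oF TCP listener was reachable at that moment), and once the qpair gives up, its outstanding I/O is completed back with sct=0, sc=8, which in the NVMe generic status set corresponds to "Command Aborted due to SQ Deletion", followed by the CQ transport error -6 seen in the log. The snippet below is not SPDK code; it is a minimal standalone sketch that reproduces the same connect() failure mode with plain POSIX sockets. Only the address and port are taken from the log; everything else is illustrative.

/*
 * Minimal sketch (plain POSIX sockets, not SPDK) of the failure the log
 * repeats: connect() to 10.0.0.2:4420 returning errno 111 (ECONNREFUSED)
 * when nothing is listening on the target side.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* NVMe-oF TCP port from the log */
    };

    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on 10.0.0.2:4420 this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}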
00:24:58.167 [2024-07-25 14:26:27.650206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.650231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.650316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.650341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.650468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.650494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.650610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.650634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.650773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.650798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.650913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.650938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.651028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.651064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.651174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.651201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.651300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.651326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.651482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.651508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-25 14:26:27.651621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.651646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.651814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.651864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.651983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.652008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.652133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.652159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.652273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.652299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.652445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.652477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.652625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.652651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.652781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.652819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.652960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.652999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.653123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.653150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 
00:24:58.167 [2024-07-25 14:26:27.653270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.653296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.653413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.653438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.653605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.653630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.653811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.653836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.653921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.653946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.654054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.654085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.654178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.654203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.654295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.654320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.654398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.167 [2024-07-25 14:26:27.654423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.167 qpair failed and we were unable to recover it. 00:24:58.167 [2024-07-25 14:26:27.654519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.654545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-25 14:26:27.654663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.654688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.654763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.654788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.654873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.654898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.654984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.655008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.655109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.655148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.655278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.655304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.655385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.655411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.655526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.655551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.655664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.655689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.655775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.655800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-25 14:26:27.655895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.655921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.656004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.656029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.656120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.656157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.656255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.656281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.656405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.656431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.656546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.656571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.656658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.656685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.656771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.656797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.656914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.656939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.657034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.657070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.168 [2024-07-25 14:26:27.657156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.657181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.657327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.657352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.657469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.657494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.657584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.657608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.657825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.657877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.657994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.658019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.658149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.658175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.658294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.658320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.658437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.658463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 00:24:58.168 [2024-07-25 14:26:27.658551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.168 [2024-07-25 14:26:27.658576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.168 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-25 14:26:27.658714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.658741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.658858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.658883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.658963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.658988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.659072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.659105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.659195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.659220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.659348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.659387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.659516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.659544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.659632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.659658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.659745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.659770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.659854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.659880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-25 14:26:27.660011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.660049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.660153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.660180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.660277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.660303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.660446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.660471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.660583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.660609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.660704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.660734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.660855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.660880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.660962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.660990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.661123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.661149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.661267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.661292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 
00:24:58.169 [2024-07-25 14:26:27.661447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.169 [2024-07-25 14:26:27.661472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.169 qpair failed and we were unable to recover it. 00:24:58.169 [2024-07-25 14:26:27.661591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.661615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.661703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.661733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.661852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.661877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.661971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.661997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.662082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.662108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.662195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.662220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.662339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.662364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.662509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.662535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.662618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.662644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 
00:24:58.170 [2024-07-25 14:26:27.662759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.662785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.662899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.662925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.663005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.663030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.663154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.663182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.663271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.663296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.663425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.663450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.663596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.663622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.663746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.663771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.663900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.663938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.664067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.664095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 
00:24:58.170 [2024-07-25 14:26:27.664244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.664269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.664388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.664413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.664560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.664585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.664705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.664730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.664846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.664871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.664956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.664981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.665099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.665134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.665218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.665243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.665361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.665385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.665502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.665527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 
00:24:58.170 [2024-07-25 14:26:27.665621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.170 [2024-07-25 14:26:27.665646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.170 qpair failed and we were unable to recover it. 00:24:58.170 [2024-07-25 14:26:27.665768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.665798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.665917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.665955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.666055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.666089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.666182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.666207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.666325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.666351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.666439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.666464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.666611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.666638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.666757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.666785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.666875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.666902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 
00:24:58.171 [2024-07-25 14:26:27.667024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.667049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.667172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.667197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.667343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.667374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.667494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.667519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.667636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.667662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.667779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.667804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.667895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.667920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.668008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.668033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.668151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.668190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.668299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.668337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 
00:24:58.171 [2024-07-25 14:26:27.668456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.668482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.668576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.668602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.668744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.668769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.668865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.668890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.668984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.669011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.669151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.669177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.669325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.669350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.669446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.669471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.669614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.669639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.669732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.669757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 
00:24:58.171 [2024-07-25 14:26:27.669845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.669870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.669954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.669980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.670098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.670126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.670244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.171 [2024-07-25 14:26:27.670269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.171 qpair failed and we were unable to recover it. 00:24:58.171 [2024-07-25 14:26:27.670389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.670414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.670528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.670553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.670666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.670691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.670786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.670812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.670903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.670928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.671071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.671101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 
00:24:58.172 [2024-07-25 14:26:27.671224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.671249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.671344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.671371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.671488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.671512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.671656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.671681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.671801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.671826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.671939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.671966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.672078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.672117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.672250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.672289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.672382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.672409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.672528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.672554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 
00:24:58.172 [2024-07-25 14:26:27.672674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.672700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.672850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.672876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.673007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.673045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.673168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.673195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.673300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.673325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.673410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.673435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.673526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.673551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.673667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.673692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.673775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.673800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.673893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.673919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 
00:24:58.172 [2024-07-25 14:26:27.674034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.674066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.674165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.674191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.674275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.674299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.674412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.674437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.674552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.674577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.674693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.172 [2024-07-25 14:26:27.674718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.172 qpair failed and we were unable to recover it. 00:24:58.172 [2024-07-25 14:26:27.674799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.674827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.674956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.674995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.675103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.675141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.675270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.675297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-25 14:26:27.675395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.675421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.675508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.675533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.675627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.675654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.675744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.675769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.675890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.675917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.676027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.676052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.676156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.676181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.676332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.676357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.676481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.676506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.676598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.676623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-25 14:26:27.676722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.676748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.676839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.676868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.677005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.677044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.677184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.677212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.677339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.677364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.677507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.677533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.677640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.677664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.677748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.677775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.677917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.677942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.678073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.678101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.173 [2024-07-25 14:26:27.678234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.678259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.678353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.678379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.678531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.678556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.678681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.678706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.678798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.678823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.678911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.678938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.679052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.679084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.679232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.679258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.679355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.679380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 00:24:58.173 [2024-07-25 14:26:27.679498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.173 [2024-07-25 14:26:27.679523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.173 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-25 14:26:27.679670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.679695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.679818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.679844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.679957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.679982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.680103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.680129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.680234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.680259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.680372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.680396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.680519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.680550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.680669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.680694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.680781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.680807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.680953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.680978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-25 14:26:27.681099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.681128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.681217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.681242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.681336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.681362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.681451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.681476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.681631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.681656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.681767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.681793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.681882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.681906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.682037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.682083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.682186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.682212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.682328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.682353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 
00:24:58.174 [2024-07-25 14:26:27.682440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.682466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.682575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.682600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.682717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.682742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.682855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.682879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.682967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.682991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.683080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.683106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.683200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.683227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.174 qpair failed and we were unable to recover it. 00:24:58.174 [2024-07-25 14:26:27.683323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.174 [2024-07-25 14:26:27.683361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-25 14:26:27.683512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-25 14:26:27.683538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-25 14:26:27.683654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-25 14:26:27.683679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 
00:24:58.175 [2024-07-25 14:26:27.683799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-25 14:26:27.683824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-25 14:26:27.683922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-25 14:26:27.683961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-25 14:26:27.684082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.175 [2024-07-25 14:26:27.684109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.175 qpair failed and we were unable to recover it. 00:24:58.175 [2024-07-25 14:26:27.684221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.684248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.684397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.684422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.684512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.684538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.684647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.684672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.684825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.684853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.684974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.685103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 
00:24:58.176 [2024-07-25 14:26:27.685227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.685331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.685464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.685597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.685705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.685810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.685974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.685999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.686092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.686119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.686210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.686238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.686354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.686381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 
00:24:58.176 [2024-07-25 14:26:27.686497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.686521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.686662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.686687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.686769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.686795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.686909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.686933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.687029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.687053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.687153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.687179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.687299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.687324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.687419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.687445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.687539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.687565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.687684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.687710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 
00:24:58.176 [2024-07-25 14:26:27.687837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.687861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.687943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.687968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.688067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.688117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.688261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.688299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.688396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.688422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.688507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.688532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.688628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.176 [2024-07-25 14:26:27.688653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.176 qpair failed and we were unable to recover it. 00:24:58.176 [2024-07-25 14:26:27.688738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.688763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.688874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.688899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.689016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.689044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 
00:24:58.177 [2024-07-25 14:26:27.689154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.689183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.689335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.689362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.689481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.689507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.689600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.689629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.689747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.689772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.689890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.689915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.690006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.690032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.690141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.690166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.690258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.690284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.690429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.690453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 
00:24:58.177 [2024-07-25 14:26:27.690580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.690607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.690701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.690725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.690820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.690847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.690969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.690994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.691090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.691120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.691256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.691282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.691401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.691426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.691520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.691546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.691636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.691661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.691804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.691829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 
00:24:58.177 [2024-07-25 14:26:27.691961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.692000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.692160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.692188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.692273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.692298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.692423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.692447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.692564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.692588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.692686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.692710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.692823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.692849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.692976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.693013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.693134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.693161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 00:24:58.177 [2024-07-25 14:26:27.693282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.177 [2024-07-25 14:26:27.693307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.177 qpair failed and we were unable to recover it. 
00:24:58.177 [2024-07-25 14:26:27.693443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.177 [2024-07-25 14:26:27.693496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:24:58.178 qpair failed and we were unable to recover it.
00:24:58.178 [... the same three-message sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 14:26:27.693641 through 14:26:27.722881, cycling over tqpair values 0x7f9f44000b90, 0x7f9f4c000b90, 0x7f9f54000b90 and 0x221c250, all against addr=10.0.0.2, port=4420 ...]
00:24:58.184 [2024-07-25 14:26:27.722980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.723007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.723103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.723129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.723211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.723236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.723328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.723353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.723460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.723524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.723719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.723744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.723862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.723888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.724007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.724032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.724135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.184 [2024-07-25 14:26:27.724163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.184 qpair failed and we were unable to recover it. 00:24:58.184 [2024-07-25 14:26:27.724290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.724318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 
00:24:58.185 [2024-07-25 14:26:27.724415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.724442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.724535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.724560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.724676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.724701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.724824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.724849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.724930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.724956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.725071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.725097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.725219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.725244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.725359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.725384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.725478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.725505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.725643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.725668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 
00:24:58.185 [2024-07-25 14:26:27.725788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.725814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.725906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.725931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.726021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.726048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.726179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.726206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.726330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.726355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.726445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.726471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.726578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.726604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.726712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.726737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.726826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.726851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.726936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.726963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 
00:24:58.185 [2024-07-25 14:26:27.727121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.727160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.727271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.727310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.727398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.727425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.727540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.727564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.727655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.727680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.727842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.727888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.727999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.728024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.728123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.728149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.728242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.728267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.728390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.728415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 
00:24:58.185 [2024-07-25 14:26:27.728559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.728584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.728701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.728728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.728823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.728850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.185 qpair failed and we were unable to recover it. 00:24:58.185 [2024-07-25 14:26:27.728964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.185 [2024-07-25 14:26:27.728990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.729092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.729117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.729264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.729290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.729406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.729431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.729548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.729574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.729720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.729748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.729856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.729882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 
00:24:58.186 [2024-07-25 14:26:27.730001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.730027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.730151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.730177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.730267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.730292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.730491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.730517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.730631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.730656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.730794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.730846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.730963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.730989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.731077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.731105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.731263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.731301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.731453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.731480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 
00:24:58.186 [2024-07-25 14:26:27.731601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.731627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.731719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.731744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.731872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.731909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.732013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.732039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.732161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.732187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.732267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.732291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.732408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.732433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.732556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.732581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.732726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.732751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.732865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.732890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 
00:24:58.186 [2024-07-25 14:26:27.733007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.733031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.733155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.733183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.186 [2024-07-25 14:26:27.733332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.186 [2024-07-25 14:26:27.733358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.186 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.733481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.733506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.733596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.733623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.733732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.733757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.733877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.733902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.734026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.734053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.734205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.734230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.734348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.734373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 
00:24:58.187 [2024-07-25 14:26:27.734488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.734513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.734599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.734623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.734731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.734756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.734841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.734868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.734961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.734987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.735103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.735130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.735248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.735273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.735388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.735414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.735507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.735533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.735644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.735670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 
00:24:58.187 [2024-07-25 14:26:27.735755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.735780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.735869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.735894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.736040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.736070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.736156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.736181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.736303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.736329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.736446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.736470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.736555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.736579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.736669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.736694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.736771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.736803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.736933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.736971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 
00:24:58.187 [2024-07-25 14:26:27.737130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.737169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.737293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.737320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.737413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.737439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.737560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.737585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.737706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.737733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.737865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.187 [2024-07-25 14:26:27.737903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.187 qpair failed and we were unable to recover it. 00:24:58.187 [2024-07-25 14:26:27.738055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.738094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.738189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.738216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.738308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.738333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.738477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.738503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-25 14:26:27.738617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.738642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.738755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.738783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.738912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.738941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.739080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.739118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.739239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.739265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.739407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.739456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.739537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.739562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.739753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.739778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.739867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.739893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.739990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.740017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-25 14:26:27.740169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.740196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.740336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.740362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.740473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.740516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.740681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.740730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.740873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.740899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.740991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.741023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.741138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.741164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.741285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.741310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.741401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.741426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.741525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.741550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-25 14:26:27.741670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.741695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.741812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.741838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.741963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.741988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.742117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.742145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.742234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.742259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.742368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.742393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.742502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.742527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.742623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.742647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.742735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.742761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.188 [2024-07-25 14:26:27.742878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.742904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 
00:24:58.188 [2024-07-25 14:26:27.742991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.188 [2024-07-25 14:26:27.743016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.188 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.743139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.743165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.743274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.743299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.743420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.743445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.743531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.743556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.743675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.743702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.743831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.743870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.743974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.744002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.744117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.744144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.744238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.744264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 
00:24:58.189 [2024-07-25 14:26:27.744357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.744383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.744526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.744552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.744673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.744700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.744788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.744813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.744899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.744923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.745078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.745105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.745197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.745222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.745334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.745359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.745503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.745528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.745643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.745669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 
00:24:58.189 [2024-07-25 14:26:27.745780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.745805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.745925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.745950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.746071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.746097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.746189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.746217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.746337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.746362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.746450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.746481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.746570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.746596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.746682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.746707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.746827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.746852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.746939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.746965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 
00:24:58.189 [2024-07-25 14:26:27.747066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.747093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.747210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.747236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.747346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.747372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.747491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.747517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.189 [2024-07-25 14:26:27.747660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.189 [2024-07-25 14:26:27.747684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.189 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.747792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.747817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.747933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.747958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.748136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.748174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.748306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.748334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.748461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.748488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 
00:24:58.190 [2024-07-25 14:26:27.748577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.748602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.748696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.748721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.748803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.748828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.748970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.748994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.749092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.749121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.749218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.749243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.749329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.749354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.749448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.749473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.749551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.749576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.749691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.749719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 
00:24:58.190 [2024-07-25 14:26:27.749809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.749835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.749925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.749952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.750068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.750094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.750247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.750272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.750360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.750386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.750479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.750504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.750599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.750626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.750744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.750769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.750854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.750881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.750999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.751025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 
00:24:58.190 [2024-07-25 14:26:27.751151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.751177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.751294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.751320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.751460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.751485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.751572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.751597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.751685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.751711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.751858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.751887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.752018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.752057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.752163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.752190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.190 [2024-07-25 14:26:27.752280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.190 [2024-07-25 14:26:27.752305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.190 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.752383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.752408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 
00:24:58.191 [2024-07-25 14:26:27.752557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.752599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.752681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.752706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.752792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.752816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.752922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.752947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.753081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.753108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.753224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.753249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.753357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.753382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.753530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.753555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.753672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.753696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.753788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.753816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 
00:24:58.191 [2024-07-25 14:26:27.753943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.753980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.754103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.754131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.754222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.754249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.754343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.754368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.754451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.754476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.754592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.754619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.754744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.754769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.754880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.754905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.754988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.755013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.755139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.755164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 
00:24:58.191 [2024-07-25 14:26:27.755272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.755297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.755386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.755411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.755534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.755563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.755661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.755699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.755831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.755858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.755952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.755980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.756102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.756128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.756219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.756244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.191 [2024-07-25 14:26:27.756354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.191 [2024-07-25 14:26:27.756380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.191 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.756528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.756553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-25 14:26:27.756669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.756694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.756788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.756813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.756931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.756957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.757099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.757124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.757220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.757247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.757330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.757355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.757501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.757527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.757637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.757662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.757748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.757773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.757874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.757912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-25 14:26:27.758037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.758070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.758171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.758199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.758291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.758316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.758402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.758427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.758580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.758624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.758766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.758821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.758966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.758991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.759071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.759097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.759236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.759261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.759347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.759376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-25 14:26:27.759462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.759487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.759572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.759597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.759685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.759710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.759790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.759814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.759923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.759948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.760096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.760122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.760206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.760231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.760320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.760344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.760456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.760481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.760594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.760622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 
00:24:58.192 [2024-07-25 14:26:27.760712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.760738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.760866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.760904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.760997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.192 [2024-07-25 14:26:27.761023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.192 qpair failed and we were unable to recover it. 00:24:58.192 [2024-07-25 14:26:27.761138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.761164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.761282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.761307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.761417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.761442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.761533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.761558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.761681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.761707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.761792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.761818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.761955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.761993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 
00:24:58.193 [2024-07-25 14:26:27.762114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.762142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.762228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.762253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.762361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.762386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.762529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.762553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.762684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.762722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.762810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.762836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.762960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.762990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.763110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.763135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.763247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.763272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.763355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.763380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 
00:24:58.193 [2024-07-25 14:26:27.763522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.763547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.763666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.763691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.763782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.763810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.763925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.763953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.764050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.764086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.764204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.764230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.764322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.764349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.764433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.764458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.764553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.764580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 00:24:58.193 [2024-07-25 14:26:27.764667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.193 [2024-07-25 14:26:27.764692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.193 qpair failed and we were unable to recover it. 
00:24:58.193 [2024-07-25 14:26:27.764783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.193 [2024-07-25 14:26:27.764808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420
00:24:58.193 qpair failed and we were unable to recover it.
00:24:58.194 [2024-07-25 14:26:27.766714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.194 [2024-07-25 14:26:27.766753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420
00:24:58.194 qpair failed and we were unable to recover it.
00:24:58.194 [2024-07-25 14:26:27.766887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.194 [2024-07-25 14:26:27.766925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420
00:24:58.194 qpair failed and we were unable to recover it.
00:24:58.199 [2024-07-25 14:26:27.792896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.199 [2024-07-25 14:26:27.792935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:24:58.199 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 14:26:27.764 through 14:26:27.793 for tqpair handles 0x7f9f4c000b90, 0x7f9f54000b90, 0x7f9f44000b90, and 0x221c250 ...]
00:24:58.200 [2024-07-25 14:26:27.793596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.793623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.793736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.793761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.793904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.793928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.794044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.794079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.794169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.794195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.794318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.794342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.794424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.794449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.794562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.794587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.794686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.794710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.794840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.794878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 
00:24:58.200 [2024-07-25 14:26:27.794978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.795016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.795149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.795176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.795297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.795322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.795447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.795472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.795559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.795583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.795662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.795687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.795816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.795855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.795956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.795983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.796084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.796111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.796201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.796226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 
00:24:58.200 [2024-07-25 14:26:27.796380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.796405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.796517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.796542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.796626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.796651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.796774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.796813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.796917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.796944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.797056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.797091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.797185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.797211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.797297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.797324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.797416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.797441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.797561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.797587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 
00:24:58.200 [2024-07-25 14:26:27.797732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.797779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.797895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.797923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.798052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.798084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.798199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.798224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.200 [2024-07-25 14:26:27.798336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.200 [2024-07-25 14:26:27.798360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.200 qpair failed and we were unable to recover it. 00:24:58.201 [2024-07-25 14:26:27.798447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.798498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 00:24:58.201 [2024-07-25 14:26:27.798658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.798697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 00:24:58.201 [2024-07-25 14:26:27.798826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.798871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 00:24:58.201 [2024-07-25 14:26:27.798961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.798987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 00:24:58.201 [2024-07-25 14:26:27.799121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.799147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 
00:24:58.201 [2024-07-25 14:26:27.799265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.799291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 00:24:58.201 [2024-07-25 14:26:27.799376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.799401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 00:24:58.201 [2024-07-25 14:26:27.799506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.799557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 00:24:58.201 [2024-07-25 14:26:27.799663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.201 [2024-07-25 14:26:27.799715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.201 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-25 14:26:27.799855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-25 14:26:27.799882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.486 [2024-07-25 14:26:27.800001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.486 [2024-07-25 14:26:27.800026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.486 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.800122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.800150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.800238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.800264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.800386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.800411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.800550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.800575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 
00:24:58.487 [2024-07-25 14:26:27.800663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.800688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.800800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.800825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.800915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.800940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.801063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.801092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.801185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.801210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.801320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.801344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.801428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.801452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.801591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.801617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.801699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.801728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.801842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.801869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 
00:24:58.487 [2024-07-25 14:26:27.801978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.802004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.802108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.802146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.802240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.802266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.802411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.802460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.802603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.802653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.802790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.802827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.802961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.802985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.803100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.803127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.803222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.803247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.803372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.803410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 
00:24:58.487 [2024-07-25 14:26:27.803534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.803572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.803703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.803744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.803914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.803958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.804104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.804131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.804251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.804276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.804375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.804413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.804538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.804564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.804654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.804680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.487 [2024-07-25 14:26:27.804781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.487 [2024-07-25 14:26:27.804806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.487 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.804888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.804915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 
00:24:58.488 [2024-07-25 14:26:27.805038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.805085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.805215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.805242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.805336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.805361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.805480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.805505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.805594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.805619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.805733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.805763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.805877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.805902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.806018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.806042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.806143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.806168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.806276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.806302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 
00:24:58.488 [2024-07-25 14:26:27.806418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.806442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.806528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.806556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.806675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.806701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.806798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.806824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.806917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.806942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.807084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.807110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.807200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.807226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.807347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.807373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.807466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.807491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.807607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.807632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 
00:24:58.488 [2024-07-25 14:26:27.807716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.807741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.807850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.807874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.807958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.807983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.808069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.808095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.808186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.808211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.808341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.808366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.808452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.808479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.808597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.808622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.808716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.808742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.808889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.808914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 
00:24:58.488 [2024-07-25 14:26:27.809031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.488 [2024-07-25 14:26:27.809056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.488 qpair failed and we were unable to recover it. 00:24:58.488 [2024-07-25 14:26:27.809219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.809245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.809356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.809382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.809492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.809542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.809656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.809701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.809846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.809871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.809983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.810008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.810087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.810112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.810222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.810247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.810331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.810356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 
00:24:58.489 [2024-07-25 14:26:27.810463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.810488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.810584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.810609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.810726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.810751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.810842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.810867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.810986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.811013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.811145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.811184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.811312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.811339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.811449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.811474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.811589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.811616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.811724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.811762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 
00:24:58.489 [2024-07-25 14:26:27.811879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.811905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.811989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.812013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.812099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.812124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.812221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.812250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.812364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.812389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.812539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.812579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.812724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.812764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.812957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.812983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.813098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.813125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.813251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.813290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 
00:24:58.489 [2024-07-25 14:26:27.813409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.813435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.813571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.813596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.813690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.813717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.813817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.813855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.813948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.813974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.489 [2024-07-25 14:26:27.814092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.489 [2024-07-25 14:26:27.814118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.489 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.814199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.814224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.814329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.814354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.814468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.814492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.814606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.814631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 
00:24:58.490 [2024-07-25 14:26:27.814720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.814747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.814861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.814889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.815002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.815027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.815159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.815185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.815270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.815295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.815411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.815436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.815524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.815549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.815665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.815690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.815776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.815803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.815941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.815980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 
00:24:58.490 [2024-07-25 14:26:27.816102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.816130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.816252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.816280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.816428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.816454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.816607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.816632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.816745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.816771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.816853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.816878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.816976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.817003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.817120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.817146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.817239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.817263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.817385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.817432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 
00:24:58.490 [2024-07-25 14:26:27.817615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.817668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.817822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.817867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.817988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.818013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.818103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.818129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.490 qpair failed and we were unable to recover it. 00:24:58.490 [2024-07-25 14:26:27.818247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.490 [2024-07-25 14:26:27.818275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.818365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.818390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.818555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.818603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.818799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.818849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.818966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.818990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.819098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.819142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-25 14:26:27.819290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.819316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.819426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.819476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.819619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.819667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.819782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.819806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.819938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.819962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.820085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.820111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.820216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.820241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.820354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.820379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.820463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.820488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.820603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.820628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-25 14:26:27.820740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.820765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.820862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.820887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.820995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.821020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.821148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.821174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.821285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.821310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.821427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.821452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.821540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.821565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.821674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.821699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.821821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.821859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.821968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.822006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 
00:24:58.491 [2024-07-25 14:26:27.822145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.822175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.822263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.822289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.822416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.822441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.822529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.822555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.822641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.822666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.822782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.491 [2024-07-25 14:26:27.822808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.491 qpair failed and we were unable to recover it. 00:24:58.491 [2024-07-25 14:26:27.822899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.822930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.823018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.823044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.823175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.823205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.823310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.823348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 
00:24:58.492 [2024-07-25 14:26:27.823492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.823518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.823639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.823664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.823782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.823806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.823924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.823950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.824071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.824098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.824183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.824208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.824314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.824340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.824462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.824487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.824604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.824632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.824750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.824776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 
00:24:58.492 [2024-07-25 14:26:27.824872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.824898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.824983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.825008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.825132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.825159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.825275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.825301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.825492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.825530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.825649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.825701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.825866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.825904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.826063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.826091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.826208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.826233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.826348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.826373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 
00:24:58.492 [2024-07-25 14:26:27.826458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.826483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.826624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.826673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.826814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.826880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.826999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.827027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.492 [2024-07-25 14:26:27.827124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.492 [2024-07-25 14:26:27.827150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.492 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.827270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.827297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.827394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.827420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.827529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.827555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.827673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.827700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.827792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.827817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 
00:24:58.493 [2024-07-25 14:26:27.827924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.827962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.828088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.828116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.828208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.828233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.828327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.828352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.828439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.828465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.828610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.828635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.828792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.828819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.828925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.828951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.829056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.829102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.829210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.829238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 
00:24:58.493 [2024-07-25 14:26:27.829384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.829409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.829560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.829609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.829745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.829772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.829978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.830016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.830180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.830206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.830349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.830374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.830522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.830560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.830688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.830734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.830897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.830936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.831064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.831092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 
00:24:58.493 [2024-07-25 14:26:27.831194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.831220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.831336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.831363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.831451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.831476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.831626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.831674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.831816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.831865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.493 [2024-07-25 14:26:27.831956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.493 [2024-07-25 14:26:27.831983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.493 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.832116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.832155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.832303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.832341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.832442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.832468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.832648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.832697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 
00:24:58.494 [2024-07-25 14:26:27.832846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.832894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.832986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.833011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.833129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.833155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.833276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.833309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.833424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.833450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.833542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.833568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.833679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.833704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.833789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.833814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.833902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.833928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.834037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.834085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 
00:24:58.494 [2024-07-25 14:26:27.834202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.834240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.834344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.834370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.834460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.834486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.834571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.834596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.834717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.834744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.834836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.834861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.834951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.834980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.835096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.835122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.835244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.835270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.835390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.835415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 
00:24:58.494 [2024-07-25 14:26:27.835511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.835538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.835663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.835690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.835811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.835838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.835923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.835950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.836093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.836120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.836235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.836260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.836376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.836401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.836513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.836538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.836701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.836744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 00:24:58.494 [2024-07-25 14:26:27.836972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.837010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.494 qpair failed and we were unable to recover it. 
00:24:58.494 [2024-07-25 14:26:27.837155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.494 [2024-07-25 14:26:27.837182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.837280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.837307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.837397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.837423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.837567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.837592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.837741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.837794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.837914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.837939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.838063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.838090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.838181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.838208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.838350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.838375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.838473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.838498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-25 14:26:27.838616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.838663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.838781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.838806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.838901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.838927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.839036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.839073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.839198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.839224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.839335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.839361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.839503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.839528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.839607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.839633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.839777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.839802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.839884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.839909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-25 14:26:27.840014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.840052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.840180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.840207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.840328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.840353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.840468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.840516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.840663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.840715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.840836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.840860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.840975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.841000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.841123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.841149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.841242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.841267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.841358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.841382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 
00:24:58.495 [2024-07-25 14:26:27.841502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.495 [2024-07-25 14:26:27.841528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.495 qpair failed and we were unable to recover it. 00:24:58.495 [2024-07-25 14:26:27.841652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.841677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.841792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.841817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.841903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.841928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.842022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.842050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.842178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.842206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.842296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.842321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.842435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.842460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.842605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.842630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.842745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.842770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-25 14:26:27.842918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.842964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.843089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.843116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.843246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.843284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.843443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.843488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.843692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.843747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.843966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.844010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.844097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.844123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.844262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.844287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.844395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.844441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.844595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.844642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-25 14:26:27.844748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.844788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.844980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.845017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.845205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.845243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.845440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.845488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.845586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.845612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.845699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.845725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.845881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.845930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.846020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.846045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.846142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.846169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.846311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.846336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 
00:24:58.496 [2024-07-25 14:26:27.846424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.846449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.846592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.846619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.846716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.846745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.846870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.846895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.847007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.847031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.496 [2024-07-25 14:26:27.847127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.496 [2024-07-25 14:26:27.847153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.496 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.847265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.847290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.847442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.847492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.847672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.847718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.847809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.847834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 
00:24:58.497 [2024-07-25 14:26:27.847922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.847946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.848068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.848094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.848219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.848256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.848413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.848462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.848610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.848655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.848759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.848810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.848932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.848958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.849076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.849102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.849198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.849225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.849339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.849365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 
00:24:58.497 [2024-07-25 14:26:27.849478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.849503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.849660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.849685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.849774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.849800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.849887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.849912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.849990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.850015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.850144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.850182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.850300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.850326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.850416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.850443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.850565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.850590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.850677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.850703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 
00:24:58.497 [2024-07-25 14:26:27.850814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.850838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.850954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.850979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.851096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.851124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.851240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.851265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.851401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.497 [2024-07-25 14:26:27.851439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.497 qpair failed and we were unable to recover it. 00:24:58.497 [2024-07-25 14:26:27.851561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.851587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.851733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.851759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.851846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.851872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.851962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.851989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 
00:24:58.498 [2024-07-25 14:26:27.852128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222a230 is same with the state(5) to be set 00:24:58.498 [2024-07-25 14:26:27.852273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.852311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.852462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.852489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.852669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.852720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.852833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.852858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.852951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.852978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.853094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.853121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.853267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.853292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.853421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.853445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.853569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.853617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 
00:24:58.498 [2024-07-25 14:26:27.853754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.853802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.853913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.853938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.854051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.854083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.854226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.854251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.854365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.854389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.854497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.854522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.854609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.854634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.854741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.854766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.854845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.854870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.854954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.854982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 
00:24:58.498 [2024-07-25 14:26:27.855069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.855096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.855191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.855216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.855332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.855363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.855482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.855509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.855604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.855630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.855745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.855783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.855975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.856013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.856140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.498 [2024-07-25 14:26:27.856166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.498 qpair failed and we were unable to recover it. 00:24:58.498 [2024-07-25 14:26:27.856307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.856333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.856450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.856476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 
00:24:58.499 [2024-07-25 14:26:27.856557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.856583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.856669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.856696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.856789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.856814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.856926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.856951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.857092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.857117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.857201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.857226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.857346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.857371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.857460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.857484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.857563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.857588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.857696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.857721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 
00:24:58.499 [2024-07-25 14:26:27.857819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.857843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.857930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.857955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.858073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.858099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.858212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.858237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.858326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.858350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.858437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.858463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.858576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.858604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.858719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.858744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.858904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.858943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.859034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.859072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 
00:24:58.499 [2024-07-25 14:26:27.859218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.859244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.859360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.859385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.859499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.859551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.859670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.859717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.859831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.859857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.860002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.860027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.860137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.860175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.860326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.860354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.860471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.860497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.860616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.860641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 
00:24:58.499 [2024-07-25 14:26:27.860763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.860788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.860931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.860956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.861051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.499 [2024-07-25 14:26:27.861094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.499 qpair failed and we were unable to recover it. 00:24:58.499 [2024-07-25 14:26:27.861211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-25 14:26:27.861237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-25 14:26:27.861381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-25 14:26:27.861407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-25 14:26:27.861523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-25 14:26:27.861549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-25 14:26:27.861665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-25 14:26:27.861691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-25 14:26:27.861784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-25 14:26:27.861809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-25 14:26:27.861920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-25 14:26:27.861945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 00:24:58.500 [2024-07-25 14:26:27.862055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.500 [2024-07-25 14:26:27.862089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.500 qpair failed and we were unable to recover it. 
00:24:58.500 [2024-07-25 14:26:27.862164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.500 [2024-07-25 14:26:27.862189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420
00:24:58.500 qpair failed and we were unable to recover it.
00:24:58.500 [... the same three-line pattern (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 14:26:27.862 through 14:26:27.893 for tqpairs 0x7f9f44000b90, 0x7f9f4c000b90, 0x7f9f54000b90 and 0x221c250 ...]
00:24:58.506 [2024-07-25 14:26:27.893648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.506 [2024-07-25 14:26:27.893673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420
00:24:58.506 qpair failed and we were unable to recover it.
00:24:58.506 [2024-07-25 14:26:27.893816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.893854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.893951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.893977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.894089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.894116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.894238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.894263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.894377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.894414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.894552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.894577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.894664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.894688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.894799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.894824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.894944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.894969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.895049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.895082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 
00:24:58.506 [2024-07-25 14:26:27.895282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.895314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.895441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.895468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.895586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.895610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.895726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.895751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.895868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.895893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.895977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.896004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.896130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.896156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.896235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.896261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.896453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.896478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 00:24:58.506 [2024-07-25 14:26:27.896602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.896627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.506 qpair failed and we were unable to recover it. 
00:24:58.506 [2024-07-25 14:26:27.896744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.506 [2024-07-25 14:26:27.896770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.896918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.896944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.897066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.897092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.897202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.897227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.897424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.897449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.897561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.897586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.897666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.897692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.897777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.897803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.897921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.897946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.898029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.898056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 
00:24:58.507 [2024-07-25 14:26:27.898178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.898203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.898343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.898368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.898457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.898483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.898597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.898625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.898720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.898745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.898869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.898907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.899024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.899051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.899185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.899212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.899338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.899363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.899478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.899505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 
00:24:58.507 [2024-07-25 14:26:27.899615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.899640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.899776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.899825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.899936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.899962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.900051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.900085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.900178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.900203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.900291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.900316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.900436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.900461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.900547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.900574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.900694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.900719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 00:24:58.507 [2024-07-25 14:26:27.900812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.507 [2024-07-25 14:26:27.900837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.507 qpair failed and we were unable to recover it. 
00:24:58.507 [2024-07-25 14:26:27.900950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.900980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.901077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.901113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.901207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.901233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.901310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.901335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.901456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.901481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.901609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.901635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.901755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.901782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.901929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.901954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.902031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.902056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.902207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.902232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 
00:24:58.508 [2024-07-25 14:26:27.902318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.902342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.902422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.902447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.902541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.902578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.902683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.902708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.902804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.902829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.902920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.902944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.903040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.903089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.903207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.903234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.903325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.903351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.903470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.903496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 
00:24:58.508 [2024-07-25 14:26:27.903616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.903642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.903762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.903791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.903873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.903900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.903992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.904016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.904106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.904131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.904246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.904273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.904365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.904390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.904535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.904588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.904806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.904845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.904984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.905023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 
00:24:58.508 [2024-07-25 14:26:27.905184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.905211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.905303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.905328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.905424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.905449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.905558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.905613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.508 qpair failed and we were unable to recover it. 00:24:58.508 [2024-07-25 14:26:27.905703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.508 [2024-07-25 14:26:27.905728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.905820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.905844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.905963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.905989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.906082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.906108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.906200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.906225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.906312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.906340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 
00:24:58.509 [2024-07-25 14:26:27.906454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.906480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.906599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.906625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.906711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.906737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.906823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.906849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.906968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.906994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.907105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.907132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.907231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.907270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.907371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.907398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.907550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.907576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.907665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.907691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 
00:24:58.509 [2024-07-25 14:26:27.907832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.907857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.907940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.907966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.908084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.908116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.908241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.908266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.908390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.908417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.908558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.908584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.908710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.908735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.908823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.908849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.908979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.909017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.909132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.909160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 
00:24:58.509 [2024-07-25 14:26:27.909268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.909317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.909474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.909522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.909639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.909687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.909839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.909885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.910024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.910049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.910145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.910170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.910310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.910355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.910471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.910520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.910609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.509 [2024-07-25 14:26:27.910634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.509 qpair failed and we were unable to recover it. 00:24:58.509 [2024-07-25 14:26:27.910744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.910769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-25 14:26:27.910861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.910889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.911002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.911029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.911152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.911178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.911267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.911294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.911462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.911501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.911718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.911756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.911935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.912000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.912213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.912239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.912357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.912382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.912524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.912576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
00:24:58.510 [2024-07-25 14:26:27.912766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.912804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.912932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.912979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.913205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.913244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.913393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.913419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.913544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.913570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.913701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.913726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.913814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.913839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.913920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.913945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.914038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.914072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 00:24:58.510 [2024-07-25 14:26:27.914186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.510 [2024-07-25 14:26:27.914211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.510 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-25 14:26:27.914327 through 14:26:27.943533, console time 00:24:58.510 to 00:24:58.516, cycling over tqpair=0x221c250, 0x7f9f44000b90, and 0x7f9f4c000b90, always with addr=10.0.0.2, port=4420 ...]
00:24:58.516 [2024-07-25 14:26:27.943695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.943733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.943881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.943917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.944118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.944156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.944278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.944311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.944446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.944494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.944573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.944598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.944723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.944761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.944887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.944915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.945031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.945064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.945189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.945215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 
00:24:58.516 [2024-07-25 14:26:27.945299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.945324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.945440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.945465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.945556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.945583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.945705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.945730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.945851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.945914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.946068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.946095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.946215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.516 [2024-07-25 14:26:27.946241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.516 qpair failed and we were unable to recover it. 00:24:58.516 [2024-07-25 14:26:27.946389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.946436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.946558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.946607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.946813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.946866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 
00:24:58.517 [2024-07-25 14:26:27.946984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.947009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.947161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.947187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.947303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.947328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.947439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.947463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.947656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.947681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.947801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.947828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.947969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.947994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.948106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.948132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.948247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.948272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.948402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.948428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 
00:24:58.517 [2024-07-25 14:26:27.948518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.948543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.948733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.948758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.948873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.948899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.949032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.949077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.949178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.949207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.949357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.949384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.949473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.949498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.949609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.949634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.949746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.949771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.949859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.949886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 
00:24:58.517 [2024-07-25 14:26:27.950010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.950044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.950174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.950212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.950372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.950411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.950578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.950615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.950757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.950784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.950963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.951001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.951173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.951205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.951293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.951318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.951546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.951583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.517 qpair failed and we were unable to recover it. 00:24:58.517 [2024-07-25 14:26:27.951749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.517 [2024-07-25 14:26:27.951785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 
00:24:58.518 [2024-07-25 14:26:27.951940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.951976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.952096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.952122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.952243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.952268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.952386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.952430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.952558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.952600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.952728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.952765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.952922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.952958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.953125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.953164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.953276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.953314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.953435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.953462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 
00:24:58.518 [2024-07-25 14:26:27.953613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.953640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.953735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.953760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.953878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.953903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.953998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.954023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.954159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.954188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.954279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.954304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.954397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.954422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.954534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.954581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.954698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.954724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.954838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.954863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 
00:24:58.518 [2024-07-25 14:26:27.955069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.955108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.955206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.955231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.955369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.955394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.955516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.955543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.955658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.955683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.955775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.955800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.955914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.955939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.956035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.956068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.956199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.956224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.956336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.956360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 
00:24:58.518 [2024-07-25 14:26:27.956481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.956506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.956586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.956611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.956725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.956750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.956858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.956895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.957012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.957039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.957199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.957227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.518 [2024-07-25 14:26:27.957318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.518 [2024-07-25 14:26:27.957344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.518 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.957444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.957471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.957587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.957613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.957701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.957728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 
00:24:58.519 [2024-07-25 14:26:27.957857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.957895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.957987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.958014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.958145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.958172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.958266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.958292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.958384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.958409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.958553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.958590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.958734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.958771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.958906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.958931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.959045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.959078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.959174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.959199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 
00:24:58.519 [2024-07-25 14:26:27.959289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.959314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.959401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.959426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.959531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.959559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.959675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.959700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.959792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.959820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.959920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.959945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.960073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.960107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.960224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.960250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.960351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.960376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.960468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.960495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 
00:24:58.519 [2024-07-25 14:26:27.960613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.960640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.960793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.960819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.960934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.960959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.961045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.961091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.961176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.961202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.961299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.961325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.961406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.961430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.961548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.961576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.961684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.961709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.961916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.961964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 
00:24:58.519 [2024-07-25 14:26:27.962057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.962088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.962207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.962233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.962350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.962375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.519 qpair failed and we were unable to recover it. 00:24:58.519 [2024-07-25 14:26:27.962493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.519 [2024-07-25 14:26:27.962520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.962641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.962667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.962816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.962854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.963009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.963034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.963201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.963227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.963345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.963382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.963513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.963549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 
00:24:58.520 [2024-07-25 14:26:27.963680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.963716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.963902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.963938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.964124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.964151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.964273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.964298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.964416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.964463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.964611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.964657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.964807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.964857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.964978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.965005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.965126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.965154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.965247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.965274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 
00:24:58.520 [2024-07-25 14:26:27.965427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.965476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.965650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.965694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.965900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.965925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.966015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.966040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.966169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.966194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.966317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.966342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.966456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.966482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.966623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.966668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.966761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.966786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.966876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.966901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 
00:24:58.520 [2024-07-25 14:26:27.966992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.967017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.967132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.967158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.967275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.967301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.967394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.967425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.967621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.967647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.967757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.967793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.967873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.967898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.968011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.968036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.968146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.968171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.520 [2024-07-25 14:26:27.968287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.968312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 
00:24:58.520 [2024-07-25 14:26:27.968432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.520 [2024-07-25 14:26:27.968457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.520 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.968554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.968579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.968671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.968696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.968777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.968802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.968896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.968922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.969016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.969055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.969186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.969213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.969335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.969360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.969471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.969496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.969581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.969606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 
00:24:58.521 [2024-07-25 14:26:27.969744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.969769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.969867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.969895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.970012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.970037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.970198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.970237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.970360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.970388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.970508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.970534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.970679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.970713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.970877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.970902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.971041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.971074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.971200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.971225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 
00:24:58.521 [2024-07-25 14:26:27.971352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.971382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.971550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.971584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.971689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.971722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.971842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.971884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.971997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.972023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.972147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.972174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.972299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.972323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.972414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.972439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.972658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.972693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.972856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.972890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 
00:24:58.521 [2024-07-25 14:26:27.973072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.973128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.973249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.973275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.973395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.973420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.973537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.973562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.973751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.973784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.973908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.973933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.974051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.974085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.974235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.974260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.974386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.974412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 00:24:58.521 [2024-07-25 14:26:27.974505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.974530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.521 qpair failed and we were unable to recover it. 
00:24:58.521 [2024-07-25 14:26:27.974653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.521 [2024-07-25 14:26:27.974679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.974859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.974892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.975015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.975048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.975187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.975213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.975334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.975360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.975532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.975565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.975697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.975730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.975852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.975893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.976050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.976115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.976214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.976241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 
00:24:58.522 [2024-07-25 14:26:27.976367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.976392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.976507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.976532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.976765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.976806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.976960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.976994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.977187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.977214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.977302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.977349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.977502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.977528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.977647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.977672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.977939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.978000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.978136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.978164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 
00:24:58.522 [2024-07-25 14:26:27.978283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.978314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.978462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.978488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.978636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.978683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.978811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.978837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.978938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.978965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.979131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.979169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.979268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.979294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.979386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.979412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.979553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.979599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.979741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.979765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 
00:24:58.522 [2024-07-25 14:26:27.979880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.979907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.979997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.980023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.980142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.980169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.980282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.980308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.980445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.980471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.522 qpair failed and we were unable to recover it. 00:24:58.522 [2024-07-25 14:26:27.980585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.522 [2024-07-25 14:26:27.980611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.980719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.980745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.980864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.980890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.981001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.981026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.981133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.981160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 
00:24:58.523 [2024-07-25 14:26:27.981308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.981334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.981421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.981448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.981643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.981668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.981751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.981776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.981877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.981902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.982024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.982049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.982176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.982201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.982398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.982424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.982542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.982568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.982691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.982716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 
00:24:58.523 [2024-07-25 14:26:27.982833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.982859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.982947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.982975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.983114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.983140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.983230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.983255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.983393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.983443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.983557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.983603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.983747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.983772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.983968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.983995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.984106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.984132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.984326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.984351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 
00:24:58.523 [2024-07-25 14:26:27.984497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.984549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.984753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.984799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.984899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.984924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.985051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.985081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.985172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.985198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.985282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.985307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.985386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.985411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.985499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.985526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.985613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.985638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.985757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.985782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 
00:24:58.523 [2024-07-25 14:26:27.985869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.985894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.986005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.986030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.986132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.986160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.986241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.986266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.986362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.523 [2024-07-25 14:26:27.986388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.523 qpair failed and we were unable to recover it. 00:24:58.523 [2024-07-25 14:26:27.986504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.986530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.986618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.986643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.986728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.986753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.986865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.986890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.987003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.987028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 
00:24:58.524 [2024-07-25 14:26:27.987133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.987159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.987246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.987274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.987386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.987411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.987497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.987522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.987612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.987637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.987735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.987773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.987874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.987901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.988019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.988052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.988207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.988233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.988316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.988341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 
00:24:58.524 [2024-07-25 14:26:27.988427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.988452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.988563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.988588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.988673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.988698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.988816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.988844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.988940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.988966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.989085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.989112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.989198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.989224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.989335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.989360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.989452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.989477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 00:24:58.524 [2024-07-25 14:26:27.989615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.524 [2024-07-25 14:26:27.989649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.524 qpair failed and we were unable to recover it. 
00:24:58.524 [2024-07-25 14:26:27.989787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.524 [2024-07-25 14:26:27.989820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:24:58.524 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats for every reconnect attempt from 14:26:27.989 through 14:26:28.020 (console timestamps 00:24:58.524-00:24:58.529), cycling through tqpair=0x7f9f44000b90, 0x7f9f4c000b90 and 0x221c250, always against addr=10.0.0.2, port=4420, always with errno = 111, and every attempt ends with "qpair failed and we were unable to recover it." ...]
00:24:58.529 [2024-07-25 14:26:28.020799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.020837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.020956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.020982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.021075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.021102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.021198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.021223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.021321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.021346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.021467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.021492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.021576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.021601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.021699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.021724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.021807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.021832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.021950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.021977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 
00:24:58.529 [2024-07-25 14:26:28.022088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.022115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.022198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.022225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.022316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.022341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.022454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.022479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.022592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.022620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.022714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.022741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.022826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.022850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.022945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.022970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.023079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.023119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.529 [2024-07-25 14:26:28.023210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.023235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 
00:24:58.529 [2024-07-25 14:26:28.023347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.529 [2024-07-25 14:26:28.023372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.529 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.023483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.023508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.023595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.023620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.023737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.023762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.023849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.023873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.023980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.024005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.024084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.024111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.024232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.024257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.024372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.024397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.024484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.024511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 
00:24:58.530 [2024-07-25 14:26:28.024656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.024682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.024764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.024789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.024893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.024920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.025003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.025031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.025154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.025180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.025267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.025293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.025417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.025448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.025608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.025638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.025804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.025837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.025975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.026002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 
00:24:58.530 [2024-07-25 14:26:28.026102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.026129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.026230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.026258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.026386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.026415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.026599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.026647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.026786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.026829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.026921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.026952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.027080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.027106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.027227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.027255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.027414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.027443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.027576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.027606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 
00:24:58.530 [2024-07-25 14:26:28.027745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.027774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.027884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.027910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.028017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.028042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.028174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.028199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.028330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.028358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.028475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.028503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.028686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.028729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.028810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.028836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.028949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.028974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.029093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.029119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 
00:24:58.530 [2024-07-25 14:26:28.029227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.530 [2024-07-25 14:26:28.029253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.530 qpair failed and we were unable to recover it. 00:24:58.530 [2024-07-25 14:26:28.029340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.029365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.029481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.029506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.029621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.029649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.029738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.029763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.029851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.029875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.029970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.029995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.030083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.030110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.030256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.030281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.030373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.030399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 
00:24:58.531 [2024-07-25 14:26:28.030486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.030511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.030645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.030689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.030790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.030819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.030982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.031008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.031091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.031115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.031199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.031224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.031306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.031330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.031444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.031469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.031609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.031632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.031723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.031748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 
00:24:58.531 [2024-07-25 14:26:28.031859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.031887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.032008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.032036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.032141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.032167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.032286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.032311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.032519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.032547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.032709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.032739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.032908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.032935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.033052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.033085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.033199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.033223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.033315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.033339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 
00:24:58.531 [2024-07-25 14:26:28.033478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.033520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.033607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.033632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.033710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.033733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.033868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.033907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.033998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.034025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.034173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.034199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.034287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.034312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.034393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.034418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.034538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.034564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.034658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.034684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 
00:24:58.531 [2024-07-25 14:26:28.034780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.034806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.034898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.034922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.035037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.035068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.035170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.035194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.035331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.035360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.035490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.035514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.035633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.035657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.035804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.035829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.035947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.035974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 00:24:58.531 [2024-07-25 14:26:28.036070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.531 [2024-07-25 14:26:28.036103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.531 qpair failed and we were unable to recover it. 
00:24:58.532 [2024-07-25 14:26:28.036219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.036244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.036340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.036366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.036510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.036535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.036669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.036708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.036789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.036815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.036899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.036923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.037039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.037071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.037186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.037229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.037315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.037340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.037467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.037509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 
00:24:58.532 [2024-07-25 14:26:28.037652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.037676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.037760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.037785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.037863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.037886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.037996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.038020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.038175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.038199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.038279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.038303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.038407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.038446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.038565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.038592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.038686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.038711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.038799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.038824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 
00:24:58.532 [2024-07-25 14:26:28.038911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.038936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.039077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.039108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.039200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.039244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.039401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.039430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.039561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.039589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.039734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.039764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.039899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.039924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.040039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.040071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.040229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.040254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 00:24:58.532 [2024-07-25 14:26:28.040356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.532 [2024-07-25 14:26:28.040384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.532 qpair failed and we were unable to recover it. 
00:24:58.538 [2024-07-25 14:26:28.069467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.069516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.069685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.069713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.069853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.069879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.069998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.070024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.070123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.070149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.070235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.070262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.070424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.070451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.070585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.070611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.070730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.070771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.070866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.070893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 
00:24:58.538 [2024-07-25 14:26:28.070995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.071021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.071149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.071175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.071261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.071287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.071394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.071421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.071543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.071583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.071744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.071771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.071866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.071893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.072008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.072032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.072157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.072183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.072278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.072303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 
00:24:58.538 [2024-07-25 14:26:28.072473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.072500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.072611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.072637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.072801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.072828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.072958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.072984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.073104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.073130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.073265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.073292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.073421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.073448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.073572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.073603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.073727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.073754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.073874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.073912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 
00:24:58.538 [2024-07-25 14:26:28.074071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.074099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.074188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.538 [2024-07-25 14:26:28.074214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.538 qpair failed and we were unable to recover it. 00:24:58.538 [2024-07-25 14:26:28.074354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.074379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.074473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.074498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.074608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.074633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.074716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.074740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.074891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.074916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.075017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.075042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.075172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.075197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.075320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.075346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 
00:24:58.539 [2024-07-25 14:26:28.075437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.075462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.075585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.075613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.075702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.075728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.075844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.075870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.075986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.076011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.076147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.076175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.076302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.076329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.076433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.076461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.076596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.076638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.076779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.076820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 
00:24:58.539 [2024-07-25 14:26:28.076936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.076961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.077071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.077097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.077205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.077230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.077365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.077392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.077516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.077542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.077638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.077664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.077786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.077811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.077927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.077952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.078038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.078069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.078213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.078238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 
00:24:58.539 [2024-07-25 14:26:28.078346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.539 [2024-07-25 14:26:28.078372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.539 qpair failed and we were unable to recover it. 00:24:58.539 [2024-07-25 14:26:28.078494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.078522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.078611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.078640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.078737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.078763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.078891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.078916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.079004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.079031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.079136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.079162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.079295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.079325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.079414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.079441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.079562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.079588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 
00:24:58.540 [2024-07-25 14:26:28.079683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.079709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.079876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.079921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.080012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.080037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.080155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.080180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.080287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.080313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.080437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.080463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.080608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.080650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.080787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.080811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.080924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.080950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.081086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.081112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 
00:24:58.540 [2024-07-25 14:26:28.081231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.081255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.081343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.081369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.081486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.081510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.081597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.081623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.081769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.081794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.081878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.081903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.081989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.082013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.082093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.082119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.082264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.082288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.082380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.082405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 
00:24:58.540 [2024-07-25 14:26:28.082497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.082521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.082662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.082687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.082810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.082838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.082931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.082957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.083052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.083101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.540 qpair failed and we were unable to recover it. 00:24:58.540 [2024-07-25 14:26:28.083217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.540 [2024-07-25 14:26:28.083243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.083348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.083374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.083536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.083562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.083679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.083704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.083797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.083837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 
00:24:58.541 [2024-07-25 14:26:28.083926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.083951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.084114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.084141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.084321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.084361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.084468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.084496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.084642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.084683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.084796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.084820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.084912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.084937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.085046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.085085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.085212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.085238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.085321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.085345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 
00:24:58.541 [2024-07-25 14:26:28.085433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.085459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.085576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.085600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.085739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.085763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.085876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.085902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.086049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.086087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.086177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.086202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.086344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.086369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.086452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.086476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.086615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.086639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.086725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.086750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 
00:24:58.541 [2024-07-25 14:26:28.086844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.086868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.086979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.087005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.087123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.087150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.087235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.087262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.087353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.087378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.087498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.087524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.087610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.087636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.087753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.087779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.087923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.087948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 00:24:58.541 [2024-07-25 14:26:28.088069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.541 [2024-07-25 14:26:28.088094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.541 qpair failed and we were unable to recover it. 
00:24:58.542 [2024-07-25 14:26:28.088187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.088213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.088300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.088325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.088410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.088436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.088529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.088555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.088646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.088674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.088788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.088813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.088923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.088948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.089033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.089066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.089183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.089208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.089326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.089350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 
00:24:58.542 [2024-07-25 14:26:28.089444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.089471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.089620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.089645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.089757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.089782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.089896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.089923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.090033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.090064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.090155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.090181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.090295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.090321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.090464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.090494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.090586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.090613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.090758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.090783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 
00:24:58.542 [2024-07-25 14:26:28.090902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.090928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.091047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.091080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.091204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.091230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.091350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.091375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.091494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.091519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.091633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.091658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.091777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.091802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.091891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.091916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.091996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.092023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.092120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.092146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 
00:24:58.542 [2024-07-25 14:26:28.092267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.092292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.092414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.092439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.092549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.092575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.092684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.542 [2024-07-25 14:26:28.092708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.542 qpair failed and we were unable to recover it. 00:24:58.542 [2024-07-25 14:26:28.092824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.092851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.092965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.092991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.093077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.093103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.093248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.093273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.093357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.093383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.093475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.093501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 
00:24:58.543 [2024-07-25 14:26:28.093622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.093647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.093735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.093760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.093836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.093862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.093985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.094010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.094121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.094159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.094311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.094339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.094461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.094493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.094635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.094661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.094752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.094777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.094871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.094896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 
00:24:58.543 [2024-07-25 14:26:28.094999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.095025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.095147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.095173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.095288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.095314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.095429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.095454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.095536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.095562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.095683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.095709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.095817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.095842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.095951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.095981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.096095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.096121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.096233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.096259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 
00:24:58.543 [2024-07-25 14:26:28.096345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.096370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.096461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.096486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.096579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.096605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.096700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.096726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.096841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.096867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.096984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.097009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.097095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.097121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.097238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.097264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.543 [2024-07-25 14:26:28.097344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.543 [2024-07-25 14:26:28.097369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.543 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.097455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.097480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 
00:24:58.544 [2024-07-25 14:26:28.097621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.097646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.097763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.097789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.097912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.097937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.098052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.098084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.098166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.098191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.098311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.098337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.098458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.098483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.098565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.098592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.098675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.098702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.098814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.098853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 
00:24:58.544 [2024-07-25 14:26:28.098976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.099006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.099139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.099166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.099288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.099315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.099428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.099453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.099545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.099571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.099658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.099691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.099804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.099829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.099923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.099948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.100068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.100095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.100205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.100230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 
00:24:58.544 [2024-07-25 14:26:28.100344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.100375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.100498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.100523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.100662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.100687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.100802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.544 [2024-07-25 14:26:28.100828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.544 qpair failed and we were unable to recover it. 00:24:58.544 [2024-07-25 14:26:28.100951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.100978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.101114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.101153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.101305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.101332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.101452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.101484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.101579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.101604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.101697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.101722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 
00:24:58.545 [2024-07-25 14:26:28.101807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.101834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.101918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.101945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.102039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.102084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.102209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.102235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.102321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.102346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.102460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.102484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.102598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.102622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.102774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.102801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.102914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.102939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.103028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.103054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 
00:24:58.545 [2024-07-25 14:26:28.103219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.103244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.103364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.103390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.103503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.103529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.103653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.103678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.103757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.103782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.103872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.103898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.104047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.545 [2024-07-25 14:26:28.104080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.545 qpair failed and we were unable to recover it. 00:24:58.545 [2024-07-25 14:26:28.104192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.104216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.104304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.104328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.104476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.104500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 
00:24:58.546 [2024-07-25 14:26:28.104592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.104616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.104699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.104723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.104923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.104949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.105069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.105095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.105213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.105243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.105361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.105386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.105528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.105553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.105642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.105667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.105750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.105776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.105893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.105919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 
00:24:58.546 [2024-07-25 14:26:28.106003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.106028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.106120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.106147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.106281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.106319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.106441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.106468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.106556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.106582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.106699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.106724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.106810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.106835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.106921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.106948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.107080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.107107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.546 [2024-07-25 14:26:28.107199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.107226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 
00:24:58.546 [2024-07-25 14:26:28.107322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.546 [2024-07-25 14:26:28.107348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.546 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.107435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.107460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.107544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.107570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.107665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.107692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.107796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.107835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.107950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.107976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.108072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.108097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.108190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.108214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.108302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.108327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.108450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.108479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 
00:24:58.547 [2024-07-25 14:26:28.108603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.108631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.108730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.108768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.108891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.108917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.109003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.109154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.109270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.109373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.109488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.109595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.109731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 
00:24:58.547 [2024-07-25 14:26:28.109864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.109973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.109997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.110112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.110139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.110227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.110251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.547 qpair failed and we were unable to recover it. 00:24:58.547 [2024-07-25 14:26:28.110333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.547 [2024-07-25 14:26:28.110357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.110439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.110463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.110543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.110567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.110657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.110681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.110768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.110796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.110893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.110919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 
00:24:58.548 [2024-07-25 14:26:28.111035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.111067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.111209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.111235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.111324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.111350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.111433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.111458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.111574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.111601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.111689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.111715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.111863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.111887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.111996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.112020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.112121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.112150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.112236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.112262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 
00:24:58.548 [2024-07-25 14:26:28.112351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.112376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.112489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.112515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.112630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.112655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.112770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.112797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.112889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.548 [2024-07-25 14:26:28.112915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.548 qpair failed and we were unable to recover it. 00:24:58.548 [2024-07-25 14:26:28.113067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.833 [2024-07-25 14:26:28.113093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.833 qpair failed and we were unable to recover it. 00:24:58.833 [2024-07-25 14:26:28.113186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.833 [2024-07-25 14:26:28.113212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.833 qpair failed and we were unable to recover it. 00:24:58.833 [2024-07-25 14:26:28.113307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.833 [2024-07-25 14:26:28.113332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.833 qpair failed and we were unable to recover it. 00:24:58.833 [2024-07-25 14:26:28.113425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.833 [2024-07-25 14:26:28.113451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.833 qpair failed and we were unable to recover it. 00:24:58.833 [2024-07-25 14:26:28.113528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.833 [2024-07-25 14:26:28.113553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.833 qpair failed and we were unable to recover it. 
00:24:58.833 [2024-07-25 14:26:28.113699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.833 [2024-07-25 14:26:28.113726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.833 qpair failed and we were unable to recover it. 00:24:58.833 [2024-07-25 14:26:28.113814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.113844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.113929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.113954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.114036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.114067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.114191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.114216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.114303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.114328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.114436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.114460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.114570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.114594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.114744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.114771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.114894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.114921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 
00:24:58.834 [2024-07-25 14:26:28.115038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.115070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.115183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.115208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.115305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.115330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.115417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.115442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.115533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.115557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.115655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.115682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.115770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.115795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.115880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.115906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.115993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.116020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.116118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.116144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 
00:24:58.834 [2024-07-25 14:26:28.116229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.116255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.116341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.116366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.116481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.116506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.116647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.116672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.116765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.116795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.116912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.116936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.117052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.117084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.117175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.117200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.117314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.117344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.117461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.117487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 
00:24:58.834 [2024-07-25 14:26:28.117581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.117606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.117722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.117747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.834 [2024-07-25 14:26:28.117864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.834 [2024-07-25 14:26:28.117889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.834 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.117976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.118003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.118125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.118150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.118237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.118263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.118383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.118408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.118539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.118565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.118656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.118682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.118768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.118793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 
00:24:58.835 [2024-07-25 14:26:28.118877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.118903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.118994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.119020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.119134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.119163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.119267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.119292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.119407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.119432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.119542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.119566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.119652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.119677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.119762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.119786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.119870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.119895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.119989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.120021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 
00:24:58.835 [2024-07-25 14:26:28.120121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.120148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.120239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.120266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.120410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.120435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.120544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.120570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.120720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.120747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.120842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.120869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.120984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.121009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.121101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.121126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.121208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.121232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.121323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.121348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 
00:24:58.835 [2024-07-25 14:26:28.121429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.121453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.121585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.121612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.121764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.121788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.121906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.121931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.122023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.122047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.122176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.835 [2024-07-25 14:26:28.122200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.835 qpair failed and we were unable to recover it. 00:24:58.835 [2024-07-25 14:26:28.122315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.122339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.122457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.122481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.122625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.122650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.122774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.122799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 
00:24:58.836 [2024-07-25 14:26:28.122912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.122936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.123011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.123035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.123148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.123187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.123318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.123345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.123466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.123490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.123574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.123598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.123690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.123714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.123828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.123852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.123965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.123991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.124079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.124104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 
00:24:58.836 [2024-07-25 14:26:28.124255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.124280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.124390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.124415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.124544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.124570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.124699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.124738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.124837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.124864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.125006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.125030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.125131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.125156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.125244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.125268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.125358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.125384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.125499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.125523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 
00:24:58.836 [2024-07-25 14:26:28.125641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.125666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.125777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.125801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.125894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.125918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.126035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.126073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.126198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.126224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.126316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.126347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.126435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.126462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.126552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.126578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.126667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.836 [2024-07-25 14:26:28.126693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.836 qpair failed and we were unable to recover it. 00:24:58.836 [2024-07-25 14:26:28.126815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.126840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 
00:24:58.837 [2024-07-25 14:26:28.126993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.127031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.127174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.127211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.127347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.127375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.127470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.127516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.127672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.127708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.127841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.127876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.127988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.128020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.128136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.128170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.128273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.128306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.128452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.128484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 
00:24:58.837 [2024-07-25 14:26:28.128618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.128645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.128759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.128785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.128884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.128908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.129000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.129028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.129166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.129199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.129330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.129357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.129478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.129504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.129593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.129618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.129735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.129763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.130002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.130033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 
00:24:58.837 [2024-07-25 14:26:28.130162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.130187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.130301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.130326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.130449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.130494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.130632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.130666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.130797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.130823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.130959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.130985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.131109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.131150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.131302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.131329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.131481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.131512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.131643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.131673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 
00:24:58.837 [2024-07-25 14:26:28.131826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.131861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.132026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.837 [2024-07-25 14:26:28.132073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.837 qpair failed and we were unable to recover it. 00:24:58.837 [2024-07-25 14:26:28.132181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.132207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.132331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.132359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.132478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.132503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.132646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.132671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.132766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.132792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.132909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.132935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.133014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.133039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.133167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.133192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 
00:24:58.838 [2024-07-25 14:26:28.133310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.133336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.133426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.133451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.133570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.133596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.133742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.133770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.133901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.133940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.134082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.134121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.134242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.134274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.134414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.134444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.134575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.134605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.134759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.134786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 
00:24:58.838 [2024-07-25 14:26:28.134896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.134921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.135009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.135034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.135135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.135161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.135257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.135284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.135404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.135434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.135613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.135661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.135804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.135851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.135991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.136015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.136131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.136169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.136275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.136302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 
00:24:58.838 [2024-07-25 14:26:28.136450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.136497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.136673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.136741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.136855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.136883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.136982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.137009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.137146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.838 [2024-07-25 14:26:28.137191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.838 qpair failed and we were unable to recover it. 00:24:58.838 [2024-07-25 14:26:28.137381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.137417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.137598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.137633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.137740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.137774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.137916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.137941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.138056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.138100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 
00:24:58.839 [2024-07-25 14:26:28.138182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.138228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.138390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.138424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.138635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.138669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.138785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.138822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.138945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.138973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.139090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.139121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.139249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.139275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.139419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.139443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.139615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.139649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.139782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.139807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 
00:24:58.839 [2024-07-25 14:26:28.139904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.139932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.140030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.140055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.140250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.140282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.140431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.140473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.140610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.140645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.140772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.140817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.140904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.140930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.141017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.141042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.141210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.141241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.141355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.141386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 
00:24:58.839 [2024-07-25 14:26:28.141499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.141544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.141684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.839 [2024-07-25 14:26:28.141709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.839 qpair failed and we were unable to recover it. 00:24:58.839 [2024-07-25 14:26:28.141801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.141825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.141967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.141992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.142121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.142160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.142258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.142285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.142398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.142424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.142511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.142536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.142619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.142644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.142738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.142764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 
00:24:58.840 [2024-07-25 14:26:28.142883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.142909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.142992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.143017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.143142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.143187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.143322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.143364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.143470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.143513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.143643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.143685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.143774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.143799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.143918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.143943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.144033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.144057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.144171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.144199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 
00:24:58.840 [2024-07-25 14:26:28.144328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.144353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.144447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.144471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.144551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.144576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.144692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.144716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.144804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.144834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.144983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.145008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.145144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.145188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.145278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.145305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.145430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.145456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.145593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.145635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 
00:24:58.840 [2024-07-25 14:26:28.145750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.145775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.145916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.145942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.146022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.146047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.146166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.146193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.840 [2024-07-25 14:26:28.146302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.840 [2024-07-25 14:26:28.146327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.840 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.146414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.146439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.146525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.146550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.146668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.146694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.146787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.146815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.146945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.146983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 
00:24:58.841 [2024-07-25 14:26:28.147113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.147142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.147238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.147264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.147356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.147382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.147496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.147522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.147640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.147666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.147749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.147777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.147886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.147924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.148018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.148045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.148169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.148195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.148339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.148365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 
00:24:58.841 [2024-07-25 14:26:28.148478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.148504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.148645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.148669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.148754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.148779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.148868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.148893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.149010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.149036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.149138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.149166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.149285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.149309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.149457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.149481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.149600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.149624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.149741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.149766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 
00:24:58.841 [2024-07-25 14:26:28.149881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.149906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.149990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.150015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.150136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.150162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.150277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.150303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.841 qpair failed and we were unable to recover it. 00:24:58.841 [2024-07-25 14:26:28.150422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.841 [2024-07-25 14:26:28.150446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.150572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.150597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.150682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.150713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.150827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.150851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.150936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.150960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.151052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.151087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 
00:24:58.842 [2024-07-25 14:26:28.151206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.151231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.151317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.151341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.151433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.151459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.151575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.151599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.151728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.151766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.151891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.151919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.152035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.152066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.152180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.152205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.152289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.152315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.152430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.152455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 
00:24:58.842 [2024-07-25 14:26:28.152600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.152627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.152715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.152740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.152879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.152903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.152989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.153013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.153119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.153149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.153272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.153303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.153411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.153436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.153528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.153553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.153675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.153699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.153816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.153841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 
00:24:58.842 [2024-07-25 14:26:28.153966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.154004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.154127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.154154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.154270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.154294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.154419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.154444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.154556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.154581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.154705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.154729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.154855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.154890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.155089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.155150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.155242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.842 [2024-07-25 14:26:28.155269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.842 qpair failed and we were unable to recover it. 00:24:58.842 [2024-07-25 14:26:28.155428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.155465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 
00:24:58.843 [2024-07-25 14:26:28.155610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.155647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.155832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.155869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.156019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.156045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.156201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.156228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.156387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.156422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.156552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.156600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.156746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.156796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.156962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.156991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.157107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.157132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.157244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.157269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 
00:24:58.843 [2024-07-25 14:26:28.157445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.157481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.157613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.157663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.157817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.157857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.158052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.158102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.158206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.158233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.158326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.158351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.158514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.158544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.158699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.158729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.158838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.158866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.158970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.159002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 
00:24:58.843 [2024-07-25 14:26:28.159160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.159187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.159268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.159319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.159483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.159530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.159677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.159713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.159871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.159907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.160073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.160119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.160213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.160239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.160356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.160399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.160558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.160594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 00:24:58.843 [2024-07-25 14:26:28.160709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.160745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.843 qpair failed and we were unable to recover it. 
00:24:58.843 [2024-07-25 14:26:28.160935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.843 [2024-07-25 14:26:28.160971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.161102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.161146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.161248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.161287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.161396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.161433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.161627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.161677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.161824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.161873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.162017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.162042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.162146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.162170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.162305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.162335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.162469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.162516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 
00:24:58.844 [2024-07-25 14:26:28.162654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.162700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.162842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.162867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.162957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.162981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.163140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.163173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.163339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.163384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.163558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.163590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.163721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.163752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.163903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.163934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.164071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.164101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.164252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.164281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 
00:24:58.844 [2024-07-25 14:26:28.164379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.164408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.164543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.164572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.164791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.164820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.164980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.165008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.165120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.165164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.165269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.165298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.165430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.165459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.165596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.844 [2024-07-25 14:26:28.165625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.844 qpair failed and we were unable to recover it. 00:24:58.844 [2024-07-25 14:26:28.165751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.165780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.165911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.165942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 
00:24:58.845 [2024-07-25 14:26:28.166086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.166129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.166227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.166257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.166344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.166373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.166506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.166534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.166670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.166701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.166815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.166844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.166961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.166985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.167071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.167096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.167214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.167239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.167373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.167402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 
00:24:58.845 [2024-07-25 14:26:28.167534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.167563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.167679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.167704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.167825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.167853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.167974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.168002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.168113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.168139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.168253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.168278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.168391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.168451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.168577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.168628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.168719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.168744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.168838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.168863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 
00:24:58.845 [2024-07-25 14:26:28.168976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.169002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.169125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.169152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.169237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.169263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.169420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.169459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.169584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.169611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.169769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.169794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.169884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.169909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.170045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.170095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.170226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.170257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.170414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.170449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 
00:24:58.845 [2024-07-25 14:26:28.170633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.170669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.170850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.845 [2024-07-25 14:26:28.170903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.845 qpair failed and we were unable to recover it. 00:24:58.845 [2024-07-25 14:26:28.171021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.171046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.171156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.171192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.171372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.171398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.171514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.171540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.171627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.171653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.171774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.171800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.171891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.171916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.172031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.172056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 
00:24:58.846 [2024-07-25 14:26:28.172193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.172220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.172359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.172397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.172497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.172523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.172633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.172662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.172799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.172828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.172969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.172999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.173183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.173230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.173335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.173364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.173519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.173562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.173703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.173732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 
00:24:58.846 [2024-07-25 14:26:28.173865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.173889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.174017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.174056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.174241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.174268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.174443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.174481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.174594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.174625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.174756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.174792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.174917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.174942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.175070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.175098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.846 qpair failed and we were unable to recover it. 00:24:58.846 [2024-07-25 14:26:28.175193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.846 [2024-07-25 14:26:28.175218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.175398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.175434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 
00:24:58.847 [2024-07-25 14:26:28.175595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.175651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.175796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.175843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.175927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.175951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.176038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.176069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.176178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.176207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.176412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.176460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.176617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.176647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.176784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.176809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.176927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.176951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.177042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.177072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 
00:24:58.847 [2024-07-25 14:26:28.177171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.177196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.177313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.177337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.177476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.177502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.177614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.177638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.177753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.177778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.177865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.177889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.177977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.178001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.178113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.178150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.178245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.178272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.178387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.178411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 
00:24:58.847 [2024-07-25 14:26:28.178504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.178531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.178642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.178666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.178782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.847 [2024-07-25 14:26:28.178807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.847 qpair failed and we were unable to recover it. 00:24:58.847 [2024-07-25 14:26:28.178924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.178948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.179053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.179115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.179239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.179272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.179402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.179432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.179570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.179602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.179737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.179776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.179934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.179974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 
00:24:58.848 [2024-07-25 14:26:28.180118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.180150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.180315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.180361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.180528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.180560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.180692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.180728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.180838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.180868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.180988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.181014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.181139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.181165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.181283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.181308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.181391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.181416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.181540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.181571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 
00:24:58.848 [2024-07-25 14:26:28.181766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.181796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.181920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.181945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.182067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.182095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.182189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.182214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.182299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.182323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.182469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.182498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.182615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.182658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.848 [2024-07-25 14:26:28.182797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.848 [2024-07-25 14:26:28.182828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.848 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.182964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.182993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.183138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.183163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 
00:24:58.849 [2024-07-25 14:26:28.183301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.183326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.183469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.183497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.183689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.183719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.183849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.183880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.184042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.184080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.184191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.184215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.184301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.184326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.184447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.184476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.184591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.184615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.184749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.184779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 
00:24:58.849 [2024-07-25 14:26:28.184933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.184979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.185109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.185138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.185239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.185270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.185418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.185456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.185586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.185635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.185800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.185840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.186005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.186036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.186216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.186242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.186330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.186356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.186587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.186625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 
00:24:58.849 [2024-07-25 14:26:28.186803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.186841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.187002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.187028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.849 qpair failed and we were unable to recover it. 00:24:58.849 [2024-07-25 14:26:28.187159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.849 [2024-07-25 14:26:28.187185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.850 qpair failed and we were unable to recover it. 00:24:58.850 [2024-07-25 14:26:28.187274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.850 [2024-07-25 14:26:28.187305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.850 qpair failed and we were unable to recover it. 00:24:58.850 [2024-07-25 14:26:28.187448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.850 [2024-07-25 14:26:28.187474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.850 qpair failed and we were unable to recover it. 00:24:58.850 [2024-07-25 14:26:28.187657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.850 [2024-07-25 14:26:28.187702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.850 qpair failed and we were unable to recover it. 00:24:58.850 [2024-07-25 14:26:28.187894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.850 [2024-07-25 14:26:28.187924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.850 qpair failed and we were unable to recover it. 00:24:58.850 [2024-07-25 14:26:28.188036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.850 [2024-07-25 14:26:28.188072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.850 qpair failed and we were unable to recover it. 00:24:58.850 [2024-07-25 14:26:28.188185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.850 [2024-07-25 14:26:28.188211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.850 qpair failed and we were unable to recover it. 00:24:58.850 [2024-07-25 14:26:28.188301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.850 [2024-07-25 14:26:28.188326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.850 qpair failed and we were unable to recover it. 
00:24:58.850 [2024-07-25 14:26:28.188407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.850 [2024-07-25 14:26:28.188431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:24:58.850 qpair failed and we were unable to recover it.
00:24:58.850 [2024-07-25 14:26:28.188535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.850 [2024-07-25 14:26:28.188594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:24:58.850 qpair failed and we were unable to recover it.
00:24:58.850 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 14:26:28.188 through 14:26:28.229, cycling through tqpair handles 0x7f9f44000b90, 0x7f9f4c000b90 and 0x7f9f54000b90, all against addr=10.0.0.2, port=4420 ...]
00:24:58.859 [2024-07-25 14:26:28.229140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.859 [2024-07-25 14:26:28.229183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:24:58.859 qpair failed and we were unable to recover it.
00:24:58.859 [2024-07-25 14:26:28.229366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.229408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.229581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.229623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.229795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.229837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.229970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.230014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.230233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.230278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.230456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.230499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.230715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.230757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.230930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.230974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.231146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.231189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.231426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.231502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 
00:24:58.859 [2024-07-25 14:26:28.231749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.231822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.231991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.232081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.232321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.232378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.232684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.232746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.232997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.859 [2024-07-25 14:26:28.233093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.859 qpair failed and we were unable to recover it. 00:24:58.859 [2024-07-25 14:26:28.233246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.233306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.233533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.233596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.233873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.233934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.234133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.234193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.234418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.234476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 
00:24:58.860 [2024-07-25 14:26:28.234643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.234721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.234999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.235075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.235323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.235380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.235570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.235634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.235885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.235948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.236203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.236264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.236551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.236614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.236930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.236993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.237236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.237294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.237564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.237626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 
00:24:58.860 [2024-07-25 14:26:28.237791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.237864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.238057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.238117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.238272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.238313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.238522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.238563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.238802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.238846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.239075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.239119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.239338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.239382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.239572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.239618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.239800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.860 [2024-07-25 14:26:28.239843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.860 qpair failed and we were unable to recover it. 00:24:58.860 [2024-07-25 14:26:28.240027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.240082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 
00:24:58.861 [2024-07-25 14:26:28.240226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.240270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.240446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.240488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.240640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.240684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.240838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.240883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.241027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.241081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.241243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.241285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.241485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.241527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.241709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.241753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.241945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.241989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.242170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.242215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 
00:24:58.861 [2024-07-25 14:26:28.242410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.242466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.242693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.242737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.242915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.242959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.243142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.243189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.243368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.243410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.243636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.243680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.243858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.243900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.244117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.244161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.244347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.244388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.244575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.244620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 
00:24:58.861 [2024-07-25 14:26:28.244848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.244892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.245043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.245097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.245253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.245298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.245516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.861 [2024-07-25 14:26:28.245560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.861 qpair failed and we were unable to recover it. 00:24:58.861 [2024-07-25 14:26:28.245728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.245773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.245962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.246006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.246176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.246219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.246363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.246408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.246560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.246604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.246759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.246805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 
00:24:58.862 [2024-07-25 14:26:28.247031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.247089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.247272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.247320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.247549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.247596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.247799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.247845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.248038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.248101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.248331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.248377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.248544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.248593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.248799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.248847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.249036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.249096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.249328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.249374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 
00:24:58.862 [2024-07-25 14:26:28.249569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.249616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.249806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.249854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.250099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.250147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.250344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.250392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.250576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.250622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.250812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.250858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.251086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.251135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.251337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.251385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.862 [2024-07-25 14:26:28.251550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.862 [2024-07-25 14:26:28.251596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.862 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.251787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.251832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 
00:24:58.863 [2024-07-25 14:26:28.251981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.252036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.252280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.252327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.252499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.252545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.252733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.252782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.253010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.253057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.253246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.253293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.253483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.253530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.253725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.253772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.253934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.253981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.254186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.254234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 
00:24:58.863 [2024-07-25 14:26:28.254378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.254424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.254610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.254657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.254839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.254885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.255126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.255175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.255416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.255464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.255663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.255710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.255860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.255906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.256140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.256188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.256378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.256425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.256656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.256703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 
00:24:58.863 [2024-07-25 14:26:28.256873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.256920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.257113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.257160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.257351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.863 [2024-07-25 14:26:28.257400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.863 qpair failed and we were unable to recover it. 00:24:58.863 [2024-07-25 14:26:28.257562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.257609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.257817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.257869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.258116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.258167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.258328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.258379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.258617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.258668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.258877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.258927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.259171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.259223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 
00:24:58.864 [2024-07-25 14:26:28.259413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.259462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.259671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.259723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.259903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.259953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.260200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.260250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.260448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.260498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.260720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.260770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.260972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.261021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.261245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.261295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.261481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.261530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.261739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.261788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 
00:24:58.864 [2024-07-25 14:26:28.262054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.262157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.262367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.262417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.262654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.262703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.262900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.262950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.263184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.263236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.263445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.263495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.263671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.263723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.864 [2024-07-25 14:26:28.263966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.864 [2024-07-25 14:26:28.264016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.864 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.264279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.264330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.264508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.264557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 
00:24:58.865 [2024-07-25 14:26:28.264759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.264809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.264982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.265032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.265291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.265340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.265532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.265581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.265826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.265877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.266106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.266160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.266370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.266420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.266627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.266676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.266919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.266968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.267183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.267235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 
00:24:58.865 [2024-07-25 14:26:28.267406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.267455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.267617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.267669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.267858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.267908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.268083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.268133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.268301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.268349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.865 [2024-07-25 14:26:28.268543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.865 [2024-07-25 14:26:28.268591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.865 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.268800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.268847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.269085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.269135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.269325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.269374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.269549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.269599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-07-25 14:26:28.269782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.269830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.270034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.270106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.270360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.270409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.270564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.270612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.270814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.270864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.271105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.271157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.271392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.271441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.271695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.271748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.271976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.272028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.272208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.272261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-07-25 14:26:28.272446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.272506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.272726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.272779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.273002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.273071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.273277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.273332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.273513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.273566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.273821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.273874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.274106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.274161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.274400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.274454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.274709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.274763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 00:24:58.866 [2024-07-25 14:26:28.275085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.866 [2024-07-25 14:26:28.275155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.866 qpair failed and we were unable to recover it. 
00:24:58.866 [2024-07-25 14:26:28.275366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.275417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.275662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.275712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.275932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.275988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.276260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.276315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.276506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.276560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.276778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.276832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.277018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.277087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.277318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.277372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.277548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.277604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.277847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.277901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 
00:24:58.867 [2024-07-25 14:26:28.278108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.278164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.278340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.278394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.278579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.278632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.278903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.278957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.279133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.279187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.279417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.279473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.279690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.279743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.280033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.280128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.280379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.280433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.280607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.280662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 
00:24:58.867 [2024-07-25 14:26:28.280892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.280946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.281171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.281227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.281450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.281503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.281697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.281750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.281961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.867 [2024-07-25 14:26:28.282014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.867 qpair failed and we were unable to recover it. 00:24:58.867 [2024-07-25 14:26:28.282292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.282346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.282577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.282633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.282904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.282957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.283149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.283204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.283468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.283522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 
00:24:58.868 [2024-07-25 14:26:28.283744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.283805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.284085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.284141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.284417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.284470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.284782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.284836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.285107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.285161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.285418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.285471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.285695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.285748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.285964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.286017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.286297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.286352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.286542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.286619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 
00:24:58.868 [2024-07-25 14:26:28.286890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.286947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.287150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.287212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.287446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.287506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.287784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.287837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.288072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.288129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.288398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.288452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.288658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.288715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.288953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.289010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.289258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.289320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 00:24:58.868 [2024-07-25 14:26:28.289586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.868 [2024-07-25 14:26:28.289643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.868 qpair failed and we were unable to recover it. 
00:24:58.869 [2024-07-25 14:26:28.289932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.289994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.290201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.290258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.290445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.290499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.290681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.290734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.290918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.291007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.291267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.291322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.291500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.291556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.291780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.291835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.292108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.292164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.292426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.292480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 
00:24:58.869 [2024-07-25 14:26:28.292704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.292757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.293035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.293126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.293358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.293417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.293689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.293747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.294029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.294116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.294355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.294415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.294621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.294677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.294926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.294983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.295286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.295346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.295626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.295683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 
00:24:58.869 [2024-07-25 14:26:28.295919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.295990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.296238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.296300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.296547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.296605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.869 qpair failed and we were unable to recover it. 00:24:58.869 [2024-07-25 14:26:28.296841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.869 [2024-07-25 14:26:28.296901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.297174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.297233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.297530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.297587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.297894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.297951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.298132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.298192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.298475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.298533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.298773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.298831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 
00:24:58.870 [2024-07-25 14:26:28.299111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.299170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.299438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.299495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.299768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.299826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.300025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.300100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.300397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.300456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.300701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.300758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.301031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.301108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.301341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.301399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.301617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.301674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.301904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.301963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 
00:24:58.870 [2024-07-25 14:26:28.302258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.302318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.302502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.302559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.302819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.302878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.303128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.303189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.303436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.303493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.303730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.303787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.304104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.304163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.304460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.870 [2024-07-25 14:26:28.304519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.870 qpair failed and we were unable to recover it. 00:24:58.870 [2024-07-25 14:26:28.304750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.304807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.305085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.305144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 
00:24:58.871 [2024-07-25 14:26:28.305333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.305393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.305664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.305722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.305967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.306025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.306304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.306363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.306573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.306632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.306919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.306977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.307186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.307247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.307487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.307544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.307724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.307785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.308076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.308136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 
00:24:58.871 [2024-07-25 14:26:28.308348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.308414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.308622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.308680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.308915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.308979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.309242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.309306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.309568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.309630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.309893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.309954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.310210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.310275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.310523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.310586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.310846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.310907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.311182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.311247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 
00:24:58.871 [2024-07-25 14:26:28.311470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.311533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.311793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.311857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.871 qpair failed and we were unable to recover it. 00:24:58.871 [2024-07-25 14:26:28.312113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.871 [2024-07-25 14:26:28.312178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.312475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.312538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.312854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.312916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.313173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.313237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.313497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.313558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.313851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.313913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.314117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.314182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.314383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.314447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.872 [2024-07-25 14:26:28.314718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.314780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.314996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.315078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.315332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.315393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.315634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.315697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.315996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.316076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.316338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.316403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.316711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.316774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.317029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.317116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.317412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.317475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.317777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.317839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 
00:24:58.872 [2024-07-25 14:26:28.318052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.318132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.872 qpair failed and we were unable to recover it. 00:24:58.872 [2024-07-25 14:26:28.318380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.872 [2024-07-25 14:26:28.318443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.318710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.318771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.319076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.319140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.319450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.319512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.319812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.319874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.320146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.320209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.320457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.320521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.320742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.320806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.321004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.321080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 
00:24:58.873 [2024-07-25 14:26:28.321378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.321441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.321674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.321735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.321967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.322029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.322325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.322389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.322684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.322746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.323013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.323090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.323388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.323452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.323719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.323781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.324042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.324122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.324435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.324496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 
00:24:58.873 [2024-07-25 14:26:28.324711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.324776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.325077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.325142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.325435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.325496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.325744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.325806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.326083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.326148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.326332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.326394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.326690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.326752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.327000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.873 [2024-07-25 14:26:28.327083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.873 qpair failed and we were unable to recover it. 00:24:58.873 [2024-07-25 14:26:28.327381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.327445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.327663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.327725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 
00:24:58.874 [2024-07-25 14:26:28.327977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.328038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.328258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.328324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.328554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.328616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.328869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.328931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.329192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.329256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.329469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.329532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.329793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.329854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.330116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.330189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.330443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.330506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.330754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.330818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 
00:24:58.874 [2024-07-25 14:26:28.331121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.331185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.331441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.331503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.331747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.331808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.332109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.332173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.332433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.332495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.332798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.332861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.333114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.333178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.333449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.333512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.333723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.333785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.334040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.334133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 
00:24:58.874 [2024-07-25 14:26:28.334400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.334463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.334742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.334805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.335105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.335169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.335470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.335532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.874 qpair failed and we were unable to recover it. 00:24:58.874 [2024-07-25 14:26:28.335771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.874 [2024-07-25 14:26:28.335833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.336093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.336156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.336454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.336517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.336772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.336837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.337132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.337197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.337491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.337553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 
00:24:58.875 [2024-07-25 14:26:28.337810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.337871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.338176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.338239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.338446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.338508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.338710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.338775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.339091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.339156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.339414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.339477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.339768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.339829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.340131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.340195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.340451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.340514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.340762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.340824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 
00:24:58.875 [2024-07-25 14:26:28.341090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.341155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.341403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.341466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.341717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.341782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.342121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.342187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.342397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.342460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.342755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.342817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.343116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.343182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.343493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.343566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.343874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.343937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.344185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.344248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 
00:24:58.875 [2024-07-25 14:26:28.344545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.875 [2024-07-25 14:26:28.344606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.875 qpair failed and we were unable to recover it. 00:24:58.875 [2024-07-25 14:26:28.344864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.344929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.345245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.345308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.345611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.345673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.345943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.346005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.346279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.346342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.346599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.346661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.346952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.347014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.347305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.347368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.347665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.347727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 
00:24:58.876 [2024-07-25 14:26:28.347979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.348043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.348337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.348401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.348719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.348781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.349090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.349155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.349460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.349522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.349788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.349850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.350093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.350160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.350457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.350520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.350774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.350837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.351044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.351129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 
00:24:58.876 [2024-07-25 14:26:28.351404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.351467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.351762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.351824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.352091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.352156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.352406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.352468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.352748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.352811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.353073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.353137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.353342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.353404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.876 [2024-07-25 14:26:28.353699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.876 [2024-07-25 14:26:28.353760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.876 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.354072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.354135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.354450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.354512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 
00:24:58.877 [2024-07-25 14:26:28.354772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.354834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.355086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.355149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.355378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.355440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.355690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.355756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.356023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.356105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.356360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.356423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.356733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.356795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.357055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.357145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.357385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.357448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.357740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.357802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 
00:24:58.877 [2024-07-25 14:26:28.358049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.358144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.358412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.358476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.358792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.358854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.359105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.359169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.359460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.359521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.359783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.359845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.360094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.360157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.360405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.360466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.360762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.360823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.361094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.361161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 
00:24:58.877 [2024-07-25 14:26:28.361357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.361419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.361697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.361760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.362085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.362149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.362407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.362470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.362758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.877 [2024-07-25 14:26:28.362820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.877 qpair failed and we were unable to recover it. 00:24:58.877 [2024-07-25 14:26:28.363091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.363155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.363419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.363481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.363702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.363765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.364072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.364136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.364434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.364497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 
00:24:58.878 [2024-07-25 14:26:28.364753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.364815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.365085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.365149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.365414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.365476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.365730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.365792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.366051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.366151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.366452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.366514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.366830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.366891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.367144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.367208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.367469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.367531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.367729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.367794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 
00:24:58.878 [2024-07-25 14:26:28.368093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.368157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.368460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.368522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.368826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.368887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.369139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.369203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.369464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.369525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.369748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.369812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.370087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.370151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.370418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.878 [2024-07-25 14:26:28.370491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.878 qpair failed and we were unable to recover it. 00:24:58.878 [2024-07-25 14:26:28.370710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.370775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.371025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.371106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-07-25 14:26:28.371435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.371497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.371753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.371815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.372109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.372173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.372425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.372490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.372687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.372749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.372975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.373037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.373309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.373373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.373674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.373735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.374034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.374128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.374418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.374480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-07-25 14:26:28.374794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.374855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.375122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.375187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.375438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.375501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.375723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.375785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.376039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.376123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.376405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.376468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.376773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.376834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.377093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.377158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.377346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.377410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.377700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.377762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 
00:24:58.879 [2024-07-25 14:26:28.378054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.378147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.378398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.378463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.378764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.378827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.379123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.379187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.379445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.379508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.379759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.879 [2024-07-25 14:26:28.379821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.879 qpair failed and we were unable to recover it. 00:24:58.879 [2024-07-25 14:26:28.380087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.380150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.380458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.380519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.380767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.380829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.381092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.381156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 
00:24:58.880 [2024-07-25 14:26:28.381419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.381481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.381790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.381852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.382114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.382179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.382433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.382495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.382798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.382860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.383103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.383167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.383409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.383472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.383730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.383802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.384099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.384162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.384417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.384479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 
00:24:58.880 [2024-07-25 14:26:28.384781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.384843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.385144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.385209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.385499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.385561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.385859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.385922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.386218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.386281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.386533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.386596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.386850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.386911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.387220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.387284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.387538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.387600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.387851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.387913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 
00:24:58.880 [2024-07-25 14:26:28.388177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.388243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.388545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.388609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.388857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.388919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.389166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.389231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.389525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.389588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.389852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.880 [2024-07-25 14:26:28.389914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.880 qpair failed and we were unable to recover it. 00:24:58.880 [2024-07-25 14:26:28.390137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.390202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.390460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.390522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.390790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.390852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.391150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.391214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 
00:24:58.881 [2024-07-25 14:26:28.391514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.391577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.391812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.391839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.391933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.391959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.392078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.392105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.392203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.392230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.392340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.392366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.392452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.392478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.392602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.392629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.392807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.392834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.392950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.392976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 
00:24:58.881 [2024-07-25 14:26:28.393093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.393158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.393453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.393480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.393691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.393718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.393897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.393963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.394244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.394279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.394532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.394559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.394702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.394727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.394810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.394840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.395046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.395081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.395210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.395238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 
00:24:58.881 [2024-07-25 14:26:28.395341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.395368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.395520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.395545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.395660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.395686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.395813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.395840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.395935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.395962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.396084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.396166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.881 qpair failed and we were unable to recover it. 00:24:58.881 [2024-07-25 14:26:28.396411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.881 [2024-07-25 14:26:28.396475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.396737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.396803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.397098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.397173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.397504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.397568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 
00:24:58.882 [2024-07-25 14:26:28.397855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.397921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.398204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.398270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.398494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.398557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.398849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.398911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.399166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.399244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.399555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.399626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.399912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.399987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.400272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.400338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.400541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.400629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.400887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.400952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 
00:24:58.882 [2024-07-25 14:26:28.401221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.401295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.401562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.401627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.401840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.401904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.402172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.402237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.402501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.402567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.403616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.403688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.403920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.403986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.404258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.404326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.404653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.404717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.404980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.405042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 
00:24:58.882 [2024-07-25 14:26:28.405334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.405397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.405658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.405725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.405998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.406105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.406322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.406388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.406716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.406813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.407139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.407226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.407492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.882 [2024-07-25 14:26:28.407556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.882 qpair failed and we were unable to recover it. 00:24:58.882 [2024-07-25 14:26:28.407824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.407902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.408151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.408219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.408490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.408556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 
00:24:58.883 [2024-07-25 14:26:28.408820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.408893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.409204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.409284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.409513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.409594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.409860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.409941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.410222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.410304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.410629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.410694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.411019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.411118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.411355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.411426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.411668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.411733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.411964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.412037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 
00:24:58.883 [2024-07-25 14:26:28.412344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.412409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.412753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.412831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.413046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.413132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.413444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.413521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.413783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.413852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.414167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.414246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.414520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.414601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.414924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.414990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.415262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.415344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.415625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.415691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 
00:24:58.883 [2024-07-25 14:26:28.415968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.416034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.416335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.416405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.416626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.416702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.416937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.417007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.417297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.417371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.417635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.417716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.418032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.418125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.418408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.883 [2024-07-25 14:26:28.418478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.883 qpair failed and we were unable to recover it. 00:24:58.883 [2024-07-25 14:26:28.418801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.418866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.419180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.419249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 
00:24:58.884 [2024-07-25 14:26:28.419573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.419650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.419872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.419940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.420249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.420323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.420559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.420624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.420892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.420958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.421260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.421325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.421607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.421672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.421950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.422015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.422321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.422418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.422747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.422815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 
00:24:58.884 [2024-07-25 14:26:28.423091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.423159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.423394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.423457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.423690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.423754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.423955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.424020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.424251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.424316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.424577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.424643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.424945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.425008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.425251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.425321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.425594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.425658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.425929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.425995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 
00:24:58.884 [2024-07-25 14:26:28.426316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.426380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.426682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.426755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.884 qpair failed and we were unable to recover it. 00:24:58.884 [2024-07-25 14:26:28.427047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.884 [2024-07-25 14:26:28.427212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.427487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.427554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.427839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.427905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.428141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.428220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.428490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.428569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.428837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.428922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.429190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.429273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.429471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.429536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-07-25 14:26:28.429859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.429936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.430228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.430310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.430542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.430613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.430890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.430966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.431252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.431329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.431656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.431728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.432008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.432097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.432346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.432415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.432693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.432759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.433036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.433124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-07-25 14:26:28.433392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.433458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.433767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.433838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.434095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.434161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.434379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.434456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.434738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.434808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.435028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.435146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.435393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.435466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.435691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.435772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.436025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.436123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.436455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.436532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 
00:24:58.885 [2024-07-25 14:26:28.436782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.436858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.437134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.437214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.437530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.437610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.437832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.437900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.438164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.438230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.885 [2024-07-25 14:26:28.438530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.885 [2024-07-25 14:26:28.438599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.885 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.438839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.438915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.439252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.439328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.439563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.439628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.439909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.439976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 
00:24:58.886 [2024-07-25 14:26:28.440272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.440344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.440615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.440684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.440962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.441046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.441401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.441468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.441696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.441773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.441989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.442081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.442360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.442425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.442756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.442834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.443105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.443185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.443483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.443561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 
00:24:58.886 [2024-07-25 14:26:28.443882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.443947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.444272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.444342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.444585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.444649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.444926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.444992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.445253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.445318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.445590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.445655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.445944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.446010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.446350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.446426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.446690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.446767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.446984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.447054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 
00:24:58.886 [2024-07-25 14:26:28.447393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.447463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.447737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.447814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.448081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.448152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.448443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.448521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.448852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.448918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.449201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.449268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.449512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.449590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.449858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.886 [2024-07-25 14:26:28.449935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.886 qpair failed and we were unable to recover it. 00:24:58.886 [2024-07-25 14:26:28.450163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.450232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.450565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.450646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 
00:24:58.887 [2024-07-25 14:26:28.450852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.450925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.451176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.451243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.451565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.451642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.451900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.451981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.452305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.452385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.452694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.452760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.453028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.453142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.453415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.453480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.453755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.453820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.454140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.454207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 
00:24:58.887 [2024-07-25 14:26:28.454457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.454521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.454837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.454902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.455221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.455291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.455530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.455596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.455868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.455934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.456230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.456308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.456592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.456658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.456933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.457000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.457292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.457401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.457683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.457772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 
00:24:58.887 [2024-07-25 14:26:28.458134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.458232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.458495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.458563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.458831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.458902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.459153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.459217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:58.887 [2024-07-25 14:26:28.459475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.887 [2024-07-25 14:26:28.459538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:58.887 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.459767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.459833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.460081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.460167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.460412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.460492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.460729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.460801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.461153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.461218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 
00:24:59.167 [2024-07-25 14:26:28.461470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.461535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.461789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.461851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.462090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.462141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.462325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.462359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.462550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.462604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.462741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.462775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.462981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.463013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.463155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.463192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.463318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.463352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.463484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.463518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 
00:24:59.167 [2024-07-25 14:26:28.463695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.463728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.463831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.463872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.463996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.464029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.464157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.464193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.464314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.464356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.464612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.464644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.464758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.464797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.464975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.465010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.465178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.465212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.167 qpair failed and we were unable to recover it. 00:24:59.167 [2024-07-25 14:26:28.465366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.167 [2024-07-25 14:26:28.465400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 
00:24:59.168 [2024-07-25 14:26:28.465518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.465550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.465701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.465737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.465853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.465886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.466023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.466057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.466212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.466245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.466357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.466390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.466525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.466559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.466677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.466709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.466838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.466872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.467020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.467052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 
00:24:59.168 [2024-07-25 14:26:28.467211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.467252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.467391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.467423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.467574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.467613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.467740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.467773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.467880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.467913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.468043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.468088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.468209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.468240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.468396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.468429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.468574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.468606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.468716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.468753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 
00:24:59.168 [2024-07-25 14:26:28.468869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.468900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.469013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.469044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.469180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.469213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.469319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.469351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.469481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.469515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.469636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.469668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.469770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.469801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.469921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.469954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.470076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.470126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.470237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.470272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 
00:24:59.168 [2024-07-25 14:26:28.470411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.470446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.470616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.470655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.470767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.470793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.470892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.168 [2024-07-25 14:26:28.470917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.168 qpair failed and we were unable to recover it. 00:24:59.168 [2024-07-25 14:26:28.471014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.471040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.471176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.471210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.471343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.471385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.471526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.471559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.471664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.471712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.471851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.471880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 
00:24:59.169 [2024-07-25 14:26:28.471967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.471993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.472090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.472116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.472214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.472241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.472330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.472355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.472471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.472504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.472597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.472622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.472736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.472761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.472847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.472873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.472990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.473015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.473170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.473197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 
00:24:59.169 [2024-07-25 14:26:28.473292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.473318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.473418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.473444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.473554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.473579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.473697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.473724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.473820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.473846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.473936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.473963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.474073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.474100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.474185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.474210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.474301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.474330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.474423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.474448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 
00:24:59.169 [2024-07-25 14:26:28.474537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.474567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.474697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.474723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.474809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.474841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.474939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.474965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.475053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.475087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.475184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.475217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.475303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.475328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.475416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.475441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.169 [2024-07-25 14:26:28.475527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.169 [2024-07-25 14:26:28.475558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.169 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.475681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.475706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 
00:24:59.170 [2024-07-25 14:26:28.475809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.475836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.475915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.475946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.476031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.476057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.476160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.476187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.476283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.476308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.476395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.476421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.476504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.476529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.476614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.476638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.476731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.476758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.476857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.476882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 
00:24:59.170 [2024-07-25 14:26:28.476978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.477086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.477196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.477312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.477417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.477540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.477665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.477771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.477894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.477920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.478007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 
00:24:59.170 [2024-07-25 14:26:28.478139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.478249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.478363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.478479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.478608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.478719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.478840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.478962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.478988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.479102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.479132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.479215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.479242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 
00:24:59.170 [2024-07-25 14:26:28.479331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.479356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.479443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.479469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.479564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.479595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.170 qpair failed and we were unable to recover it. 00:24:59.170 [2024-07-25 14:26:28.479689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.170 [2024-07-25 14:26:28.479713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.479833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.479863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.479953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.479978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.480091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.480117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.480221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.480247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.480370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.480395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.480507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.480533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 
00:24:59.171 [2024-07-25 14:26:28.480653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.480678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.480766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.480796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.480882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.480908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.481003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.481028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.481139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.481167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.481263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.481288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.481374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.481406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.481500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.481526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.481647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.481677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.481773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.481798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 
00:24:59.171 [2024-07-25 14:26:28.481886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.481910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.482001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.482027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.482127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.482155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.482246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.482272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.482361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.482386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.482505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.482533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.482661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.482686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.482769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.482795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.482902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.482927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.483007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.483033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 
00:24:59.171 [2024-07-25 14:26:28.483170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.483196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.483283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.483309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.483472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.483506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.483633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.483665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.483929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.483955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.484077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.484103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.484222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.484248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.484341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.171 [2024-07-25 14:26:28.484366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.171 qpair failed and we were unable to recover it. 00:24:59.171 [2024-07-25 14:26:28.484448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.484479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.484694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.484742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 
00:24:59.172 [2024-07-25 14:26:28.484888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.484926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.485076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.485113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.485223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.485250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.485375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.485402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.485527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.485560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.485693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.485732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.485895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.485939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.486067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.486123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.486221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.486247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.486366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.486426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 
00:24:59.172 [2024-07-25 14:26:28.486662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.486701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.486821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.486858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.487049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.487104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.487202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.487228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.487319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.487345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.487434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.487459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.487603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.487655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.487831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.487867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.488104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.488130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.488213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.488239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 
00:24:59.172 [2024-07-25 14:26:28.488390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.488424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.488566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.488592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.488716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.488742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.488854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.488885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.489024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.489083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.489194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.172 [2024-07-25 14:26:28.489225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.172 qpair failed and we were unable to recover it. 00:24:59.172 [2024-07-25 14:26:28.489415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.489445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.489520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.489546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.489702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.489741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.489955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.489989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 
00:24:59.173 [2024-07-25 14:26:28.490136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.490163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.490281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.490306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.490392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.490417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.490509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.490534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.490685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.490723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.490922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.490957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.491115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.491141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.491245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.491271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.491407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.491445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.491566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.491606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 
00:24:59.173 [2024-07-25 14:26:28.491758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.491807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.491927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.491953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.492111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.492137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.492255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.492282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.492396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.492448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.492606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.492632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.492748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.492773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.492858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.492884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.493012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.493037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.493181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.493220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 
00:24:59.173 [2024-07-25 14:26:28.493374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.493414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.493597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.493630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.493786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.493811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.493926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.493957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.494119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.494145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.494238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.494262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.494404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.494430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.494519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.494543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.494654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.494679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.173 [2024-07-25 14:26:28.494776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.494800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 
00:24:59.173 [2024-07-25 14:26:28.494906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.173 [2024-07-25 14:26:28.494957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.173 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.495078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.495104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.495199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.495224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.495343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.495368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.495483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.495509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.495629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.495653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.495794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.495830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.495973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.496013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.496203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.496249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 00:24:59.174 [2024-07-25 14:26:28.496341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.174 [2024-07-25 14:26:28.496367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.174 qpair failed and we were unable to recover it. 
00:24:59.174 [2024-07-25 14:26:28.496549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.174 [2024-07-25 14:26:28.496582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420
00:24:59.174 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously from 14:26:28.496549 through 14:26:28.538041, alternating between tqpair=0x7f9f4c000b90 and tqpair=0x221c250, always for addr=10.0.0.2, port=4420 ...]
00:24:59.180 [2024-07-25 14:26:28.538017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.180 [2024-07-25 14:26:28.538041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420
00:24:59.180 qpair failed and we were unable to recover it.
00:24:59.180 [2024-07-25 14:26:28.538162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.538188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.538342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.538375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.538573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.538621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.538845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.538894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.539102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.539134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.539258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.539290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.539500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.539550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.539755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.539804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.540022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.540046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.540170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.540194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 
00:24:59.180 [2024-07-25 14:26:28.540355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.540405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.540650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.540699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.540866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.540915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.541119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.541170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.541394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.541443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.541644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.541693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.541906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.541954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.180 qpair failed and we were unable to recover it. 00:24:59.180 [2024-07-25 14:26:28.542162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.180 [2024-07-25 14:26:28.542214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.542378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.542427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.542714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.542777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-07-25 14:26:28.543056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.543119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.543333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.543383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.543526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.543559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.543727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.543777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.544008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.544040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.544250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.544301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.544490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.544539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.544745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.544783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.544924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.544957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.545199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.545249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-07-25 14:26:28.545460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.545508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.545704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.545752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.546019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.546094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.546372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.546405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.546551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.546584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.546777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.546828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.547083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.547117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.547255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.547287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.547437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.547492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.547745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.547797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-07-25 14:26:28.548025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.548049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.548142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.548169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.548352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.548402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.548611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.548679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.548929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.548954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.549093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.549118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.549284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.549335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.549532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.549556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.549670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.549695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.549810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.549860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 
00:24:59.181 [2024-07-25 14:26:28.550082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.550132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.550300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.181 [2024-07-25 14:26:28.550351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.181 qpair failed and we were unable to recover it. 00:24:59.181 [2024-07-25 14:26:28.550574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.550599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.550741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.550766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.550894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.550957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.551199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.551252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.551431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.551481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.551638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.551688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.551885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.551935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.552142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.552193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-07-25 14:26:28.552375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.552400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.552542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.552566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.552777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.552828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.553037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.553101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.553369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.553433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.553687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.553750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.554004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.554099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.554349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.554422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.554734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.554785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.554982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.555033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-07-25 14:26:28.555288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.555339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.555536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.555585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.555792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.555817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.555903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.555927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.556087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.556143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.556356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.556408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.556628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.556680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.556882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.556934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.557190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.557243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.557416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.557468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 
00:24:59.182 [2024-07-25 14:26:28.557684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.557738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.558003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.182 [2024-07-25 14:26:28.558057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.182 qpair failed and we were unable to recover it. 00:24:59.182 [2024-07-25 14:26:28.558323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.558377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.558558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.558609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.558777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.558845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.559089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.559116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.559207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.559232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.559375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.559400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.559546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.559600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.559781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.559834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 
00:24:59.183 [2024-07-25 14:26:28.560088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.560145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.560325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.560350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.560492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.560516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.560708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.560760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.560988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.561041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.561256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.561307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.561544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.561594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.561847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.561897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.562129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.562185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.562402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.562456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 
00:24:59.183 [2024-07-25 14:26:28.562726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.562779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.562965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.563019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.563329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.563410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.563664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.563690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.563812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.563837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.564041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.564140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.564372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.564426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.564685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.564738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.565009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.565079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.565281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.565337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 
00:24:59.183 [2024-07-25 14:26:28.565563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.565616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.565873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.565927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.566164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.566190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.566312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.566337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.566520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.566573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.566790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.566843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.183 [2024-07-25 14:26:28.567050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.183 [2024-07-25 14:26:28.567116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.183 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.567336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.567389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.567547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.567601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.567854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.567906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 
00:24:59.184 [2024-07-25 14:26:28.568133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.568188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.568427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.568453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.568566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.568590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.568683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.568708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.568789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.568814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.568899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.568923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.569035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.569065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.569177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.569233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.569458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.569510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 00:24:59.184 [2024-07-25 14:26:28.569742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.184 [2024-07-25 14:26:28.569796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.184 qpair failed and we were unable to recover it. 
00:24:59.184 [2024-07-25 14:26:28.570077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.184 [2024-07-25 14:26:28.570131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420
00:24:59.184 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously from 14:26:28.570077 through 14:26:28.633813 ...]
00:24:59.190 [2024-07-25 14:26:28.633751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.190 [2024-07-25 14:26:28.633813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420
00:24:59.190 qpair failed and we were unable to recover it.
00:24:59.190 [2024-07-25 14:26:28.634116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.634181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.634448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.634510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.634780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.634843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.635151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.635214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.635464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.635535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.635795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.635858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.636027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.636102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.636349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.636412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.636658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.636719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.636937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.637001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 
00:24:59.190 [2024-07-25 14:26:28.637337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.637401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.637700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.637763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.638030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.638111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.638347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.638410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.638708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.638771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.639031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.190 [2024-07-25 14:26:28.639110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.190 qpair failed and we were unable to recover it. 00:24:59.190 [2024-07-25 14:26:28.639406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.639469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.639765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.639828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.640123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.640186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.640488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.640550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 
00:24:59.191 [2024-07-25 14:26:28.640856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.640918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.641160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.641224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.641502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.641562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.641855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.641929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.642241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.642299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.642553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.642621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.642846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.642902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.643164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.643220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.643418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.643485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.643765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.643830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 
00:24:59.191 [2024-07-25 14:26:28.644051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.644136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.644360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.644410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.644645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.644691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.644910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.644976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.645281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.645331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.645543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.645608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.645865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.645928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.646195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.646259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.646584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.646649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.646918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.646981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 
00:24:59.191 [2024-07-25 14:26:28.647295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.647362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.647652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.647716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.647966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.648043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.648389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.648452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.648696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.648770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.648988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.649051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.649333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.649397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.649652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.649727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.649983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.650046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.650359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.650429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 
00:24:59.191 [2024-07-25 14:26:28.650670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.191 [2024-07-25 14:26:28.650733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.191 qpair failed and we were unable to recover it. 00:24:59.191 [2024-07-25 14:26:28.650939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.651001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.651260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.651337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.651557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.651624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.651883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.651949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.652265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.652332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.652595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.652658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.652920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.652984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.653279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.653345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.653611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.653677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 
00:24:59.192 [2024-07-25 14:26:28.653948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.654011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.654295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.654362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.654583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.654647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.654899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.654962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.655245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.655312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.655543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.655606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.655869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.655934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.656267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.656334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.656585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.656660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.656889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.656953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 
00:24:59.192 [2024-07-25 14:26:28.657186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.657251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.657575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.657641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.657946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.658022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.658338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.658403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.658713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.658777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.659094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.659160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.659425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.659489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.659765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.659831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.660130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.660195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.192 [2024-07-25 14:26:28.660456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.660531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 
00:24:59.192 [2024-07-25 14:26:28.660772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.192 [2024-07-25 14:26:28.660834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.192 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.661128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.661199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.661447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.661512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.661734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.661796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.662012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.662096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.662397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.662460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.662736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.662805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.663088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.663153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.663414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.663477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.663700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.663765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 
00:24:59.193 [2024-07-25 14:26:28.663978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.664040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.664351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.664416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.664656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.664721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.665024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.665111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.665429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.665494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.665760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.665822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.666137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.666206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.666466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.666530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.666763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.666828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.667081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.667147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 
00:24:59.193 [2024-07-25 14:26:28.667382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.667453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.667747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.667813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.668082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.668146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.668412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.668477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.668718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.668782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.669020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.669129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.669414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.669480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.669749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.669815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.670128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.670195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.670440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.670503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 
00:24:59.193 [2024-07-25 14:26:28.670753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.670819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.671037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.671120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.671416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.671482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.671743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.671806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.672056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.672137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.193 [2024-07-25 14:26:28.672359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.193 [2024-07-25 14:26:28.672424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.193 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.672673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.672737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.673045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.673127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.673412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.673478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.673752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.673819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 
00:24:59.194 [2024-07-25 14:26:28.674083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.674151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.674447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.674510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.674773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.674839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.675100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.675166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.675421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.675499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.675820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.675885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.676099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.676167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.676387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.676453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.676705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.676767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.677102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.677171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 
00:24:59.194 [2024-07-25 14:26:28.677425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.677487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.677744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.677810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.678113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.678180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.678444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.678507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.678761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.678828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.679122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.679189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.679462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.679528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.679797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.679860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.680133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.680209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.680459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.680524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 
00:24:59.194 [2024-07-25 14:26:28.680765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.680828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.681105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.681180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.681449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.681513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.681746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.681824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.682143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.682209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.682434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.682497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.682718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.682784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.683006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.683102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.683428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.683507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 00:24:59.194 [2024-07-25 14:26:28.683767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.194 [2024-07-25 14:26:28.683830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.194 qpair failed and we were unable to recover it. 
00:24:59.194 [2024-07-25 14:26:28.684037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.684123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.684353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.684420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.684636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.684699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.684948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.685010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.685293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.685360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.685594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.685657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.685915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.685981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.686307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.686372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.686604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.686682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.686945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.687008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 
00:24:59.195 [2024-07-25 14:26:28.687258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.687323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.687706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.687772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.688046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.688130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.688447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.688512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.688820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.688883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.689211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.689279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.689534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.689609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.689909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.689974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.690233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.690298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.690595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.690660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 
00:24:59.195 [2024-07-25 14:26:28.690883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.690946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.691211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.691292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.691543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.691608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.691809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.691872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.692127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.692220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.692519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.692583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.692848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.692914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.693162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.693228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.693486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.693549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.693778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.693843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 
00:24:59.195 [2024-07-25 14:26:28.694090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.694167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.694452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.694522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.694802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.694866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.695131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.695197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.695417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.195 [2024-07-25 14:26:28.695482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.195 qpair failed and we were unable to recover it. 00:24:59.195 [2024-07-25 14:26:28.695710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.695773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.696074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.696148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.696438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.696501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.696724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.696787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.697053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.697157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 
00:24:59.196 [2024-07-25 14:26:28.697385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.697447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.697691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.697765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.697974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.698036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.698319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.698382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.698670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.698735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.698985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.699047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.699367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.699433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.699703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.699766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.699970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.700046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.700306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.700371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 
00:24:59.196 [2024-07-25 14:26:28.700666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.700741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.701046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.701136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.701456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.701532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.701874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.701936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.702206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.702282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.702560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.702626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.702883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.702946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.703294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.703361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.703631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.703695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.703964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.704029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 
00:24:59.196 [2024-07-25 14:26:28.704339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.704402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.704651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.704717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.705007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.705098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.705364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.705440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.705733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.705796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.706081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.706162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.706437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.706501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.706754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.706831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.707112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.707180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 00:24:59.196 [2024-07-25 14:26:28.707384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.196 [2024-07-25 14:26:28.707447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.196 qpair failed and we were unable to recover it. 
00:24:59.196 [2024-07-25 14:26:28.707653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.707727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.708039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.708125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.708423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.708493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.708784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.708847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.709096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.709162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.709442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.709507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.709755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.709818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.710074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.710142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.710366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.710428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.710705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.710779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-07-25 14:26:28.711018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.711104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.711420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.711499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.711733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.711799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.712094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.712160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.712444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.712509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.712777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.712840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.713119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.713193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.713463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.713526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.713789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.713854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.714129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.714195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-07-25 14:26:28.714498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.714568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.714815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.714878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.715172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.715248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.715510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.715576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.715871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.715933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.716178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.716248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.716528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.716592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.716812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.716886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.717163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.717228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 00:24:59.197 [2024-07-25 14:26:28.717503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.197 [2024-07-25 14:26:28.717568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.197 qpair failed and we were unable to recover it. 
00:24:59.197 [2024-07-25 14:26:28.717840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.717905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.718161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.718226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.718498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.718573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.718881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.718943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.719211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.719290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.719559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.719624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.719933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.719999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.720308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.720373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.720584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.720646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.720938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.721002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 
00:24:59.198 [2024-07-25 14:26:28.721280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.721345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.721602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.721668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.721993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.722057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.722396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.722462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.722746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.722810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.723086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.723165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.723440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.723503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.723803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.723866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.724192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.724258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.724510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.724584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 
00:24:59.198 [2024-07-25 14:26:28.724875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.724940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.725204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.725268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.725521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.725587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.725792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.725854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.726152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.726218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.726443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.726508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.726759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.726829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.727120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.727185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.727449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.727510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.727742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.727806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 
00:24:59.198 [2024-07-25 14:26:28.728054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.728140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.728396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.728462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.728699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.728761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.729003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.729119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.198 qpair failed and we were unable to recover it. 00:24:59.198 [2024-07-25 14:26:28.729264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.198 [2024-07-25 14:26:28.729298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.729446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.729479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.729605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.729639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.729760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.729794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.729908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.729940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.730128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.730176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 
00:24:59.199 [2024-07-25 14:26:28.730384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.730418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.730541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.730575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.730746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.730779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.731082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.731140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.731261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.731293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.731402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.731440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.731555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.731594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.731805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.731846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.731968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.732002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.732112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.732147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 
00:24:59.199 [2024-07-25 14:26:28.732294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.732340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.732571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.732635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.732941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.732974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.733115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.733151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.733347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.733392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.733574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.733639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.733929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.733993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.734250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.734307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.734555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.734588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.734701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.734733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 
00:24:59.199 [2024-07-25 14:26:28.734959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.734993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.735111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.735145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.735291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.735332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.735556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.735619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.735871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.735936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.736226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.736260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.736414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.736476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.736717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.736750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.736860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.736892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 00:24:59.199 [2024-07-25 14:26:28.737035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.199 [2024-07-25 14:26:28.737110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.199 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-07-25 14:26:28.737269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.737314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.737585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.737619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.737817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.737890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.738089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.738140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.738355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.738389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.738506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.738539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.738750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.738817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.739031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.739076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.739204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.739238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.739461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.739525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-07-25 14:26:28.739785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.739847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.740123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.740186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.740441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.740528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.740812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.740900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.741239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.741301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.741558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.741637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.741933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.741976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.742142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.742185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.742325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.742368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.742494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.742534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-07-25 14:26:28.742702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.742746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.742886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.742929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.743097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.743138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.743304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.743346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.743517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.743559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.743770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.743805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.743909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.743942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.744124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.744156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.744303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.744332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.744443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.744473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 
00:24:59.200 [2024-07-25 14:26:28.744571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.744601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.744725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.744755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.744888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.744917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.745031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.745078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.200 [2024-07-25 14:26:28.745219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.200 [2024-07-25 14:26:28.745249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.200 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.745385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.745414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.745517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.745547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.745642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.745672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.745830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.745859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.745990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.746020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 
00:24:59.201 [2024-07-25 14:26:28.746128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.746159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.746273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.746303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.746401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.746431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.746556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.746586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.746694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.746728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.746835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.746889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.747033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.747090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.747211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.747241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.747374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.747404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.747539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.747568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 
00:24:59.201 [2024-07-25 14:26:28.747670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.747700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.747804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.747834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.747968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.747998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.748110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.748142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.748247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.748277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.748406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.748436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.748560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.748589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.748716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.748746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.748877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.748908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.749009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.749039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 
00:24:59.201 [2024-07-25 14:26:28.749159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.749189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.749292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.749322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.749424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.749454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.749580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.749610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.749715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.749745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.749843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.749873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.750016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.750046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.750154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.750184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.750322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.750352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 00:24:59.201 [2024-07-25 14:26:28.750459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.201 [2024-07-25 14:26:28.750489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.201 qpair failed and we were unable to recover it. 
00:24:59.202 [2024-07-25 14:26:28.750596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.750628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.750735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.750769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.750874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.750904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.751010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.751040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.751175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.751205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.751304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.751334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.751428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.751458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.751553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.751583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.751689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.751719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.751811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.751841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 
00:24:59.202 [2024-07-25 14:26:28.751963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.751992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.752092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.752123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.752224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.752254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.752385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.752415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.752549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.752579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.752686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.752716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.752815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.752845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.752952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.752982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.753080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.753111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.753274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.753304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 
00:24:59.202 [2024-07-25 14:26:28.753436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.753466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.753598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.753628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.753724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.753754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.753850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.753879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.753986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.754016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.754127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.754159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.754268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.754298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.754430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.754460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.754569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.754599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.202 qpair failed and we were unable to recover it. 00:24:59.202 [2024-07-25 14:26:28.754737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.202 [2024-07-25 14:26:28.754766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 
00:24:59.203 [2024-07-25 14:26:28.754864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.754894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.754998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.755028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.755136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.755166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.755298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.755328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.755440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.755469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.755607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.755637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.755775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.755805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.755902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.755931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.756042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.756083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.756221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.756251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 
00:24:59.203 [2024-07-25 14:26:28.756355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.756384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.756498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.756527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.756710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.756758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.756907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.756940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.757102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.757148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.757279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.757310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.757413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.757442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.757550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.757580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.757717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.757748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.757877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.757907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 
00:24:59.203 [2024-07-25 14:26:28.758020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.758050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.758185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.758215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.758303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.758332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.758439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.758471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.758614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.758644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.758749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.758778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.758894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.758925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.759026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.759055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.759204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.759234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.759366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.759396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 
00:24:59.203 [2024-07-25 14:26:28.759517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.759547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.759650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.759680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.759806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.759836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.759927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.203 [2024-07-25 14:26:28.759957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.203 qpair failed and we were unable to recover it. 00:24:59.203 [2024-07-25 14:26:28.760071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.760102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.760217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.760247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.760358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.760387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.760496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.760525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.760656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.760686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.760818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.760848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 
00:24:59.204 [2024-07-25 14:26:28.760943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.760972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.761105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.761136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.761225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.761255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.761361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.761391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.761497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.761527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.761666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.761696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.761826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.761855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.761962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.761991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.762089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.762120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 00:24:59.204 [2024-07-25 14:26:28.762256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.204 [2024-07-25 14:26:28.762286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.204 qpair failed and we were unable to recover it. 
00:24:59.204 [2024-07-25 14:26:28.762424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.204 [2024-07-25 14:26:28.762454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420
00:24:59.204 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 14:26:28.762424 through 14:26:28.796551; the failing qpair is tqpair=0x221c250 until 14:26:28.790283, and from 14:26:28.790496 onward the identical errors continue for tqpair=0x7f9f44000b90 ...]
00:24:59.210 [2024-07-25 14:26:28.796518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.210 [2024-07-25 14:26:28.796551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:24:59.210 qpair failed and we were unable to recover it.
00:24:59.210 [2024-07-25 14:26:28.796688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.796721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.796862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.796894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.797098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.797132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.797250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.797283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.797457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.797490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.797632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.797665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.797803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.797850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.798081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.798144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.798286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.798330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.210 [2024-07-25 14:26:28.798580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.798629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 
00:24:59.210 [2024-07-25 14:26:28.798834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.210 [2024-07-25 14:26:28.798864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.210 qpair failed and we were unable to recover it. 00:24:59.509 [2024-07-25 14:26:28.799005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.799035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.799169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.799200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.799295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.799326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.799574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.799619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.799789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.799837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.800073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.800137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.800293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.800337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.800497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.800531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.800642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.800675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 
00:24:59.510 [2024-07-25 14:26:28.800784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.800817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.800941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.800974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.801200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.801236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.801396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.801426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.801577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.801610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.801728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.510 [2024-07-25 14:26:28.801761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.510 qpair failed and we were unable to recover it. 00:24:59.510 [2024-07-25 14:26:28.801877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.511 [2024-07-25 14:26:28.801911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.511 qpair failed and we were unable to recover it. 00:24:59.511 [2024-07-25 14:26:28.802139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.511 [2024-07-25 14:26:28.802173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.511 qpair failed and we were unable to recover it. 00:24:59.511 [2024-07-25 14:26:28.802346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.511 [2024-07-25 14:26:28.802379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.511 qpair failed and we were unable to recover it. 00:24:59.511 [2024-07-25 14:26:28.802520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.802554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-07-25 14:26:28.802777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.802824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.803046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.803150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.803327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.803376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.803574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.803622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.803819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.803866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.804054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.804111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.804282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.804343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.804574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.804621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.804852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.804901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.805119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.805164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-07-25 14:26:28.805348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.805380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.805487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.805522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.805696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.805760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.805930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.805981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.806191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.806237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.806399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.806448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.806649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.806696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.806929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.806977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.807172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.807218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.807389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.807440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-07-25 14:26:28.807643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.807687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.807889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.807938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.808139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.808184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.808365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.808398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.808502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.808536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.808678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.808711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.808991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.809038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.809254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.809298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.809545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.809577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.809748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.809803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-07-25 14:26:28.809999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.810031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.810149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.810182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.810298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.810336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.810522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.810568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.810718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.810766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.810923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.810969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.811148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.811197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.811416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.811448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.811547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.811580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.811692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.811724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 
00:24:59.512 [2024-07-25 14:26:28.811826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.811860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.814283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.814357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.814624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.512 [2024-07-25 14:26:28.814660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.512 qpair failed and we were unable to recover it. 00:24:59.512 [2024-07-25 14:26:28.814770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.814804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.814975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.815008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.815153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.815187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.815332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.815364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.815504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.815536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.815668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.815698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.815840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.815870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-07-25 14:26:28.816004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.816035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.816199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.816246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.816390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.816422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.816551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.816581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.816683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.816713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.816876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.816906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.817012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.817044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.817192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.817222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.817318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.817350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.817528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.817575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-07-25 14:26:28.817686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.817718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.817851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.817881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.818039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.818079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.818213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.818243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.818377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.818407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.818540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.818570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.818734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.818764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.818899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.818929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.819027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.819057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.819203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.819233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-07-25 14:26:28.819340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.819369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.819504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.819533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.819659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.819689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.819853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.819884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.820016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.820046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.820202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.820247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.820352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.820384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.820495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.820525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.820687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.820716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.820848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.820878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-07-25 14:26:28.820984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.821014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.821154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.821187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.821301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.821331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.821467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.821497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.821628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.821657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.821765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.821796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.821958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.821988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.822125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.822157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.822329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.822359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.822491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.822521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 
00:24:59.513 [2024-07-25 14:26:28.822647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.822677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.513 [2024-07-25 14:26:28.822808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.513 [2024-07-25 14:26:28.822837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.513 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.822979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.823010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.823145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.823178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.823310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.823340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.823507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.823537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.823666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.823696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.823828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.823858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.823988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.824017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.824126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.824156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-07-25 14:26:28.824293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.824323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.824454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.824484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.824589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.824620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.824756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.824785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.824955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.824984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.825102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.825135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.825239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.825269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.825401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.825431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.825567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.825597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.825702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.825731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-07-25 14:26:28.825871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.825901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.826040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.826080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.826222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.826252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.826399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.826445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.826584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.826616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.826754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.826785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.826909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.826939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.827108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.827139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.827243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.827273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.827401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.827431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-07-25 14:26:28.827591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.827622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.827727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.827757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.827922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.827971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.828120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.828169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.828363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.828410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.828639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.828686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.828839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.828896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.829088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.829138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.829327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.829372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.829568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.829614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-07-25 14:26:28.829767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.829812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.829968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.830014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.830223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.830271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.830416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.830464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.830647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.830679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.830799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.830831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.831007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.831056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.831270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.831324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.831470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.831504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.831709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.831742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 
00:24:59.514 [2024-07-25 14:26:28.831921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.831954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.832168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.832202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.832342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.832375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.832639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.832692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.832891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.832938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.833144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.833177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.514 qpair failed and we were unable to recover it. 00:24:59.514 [2024-07-25 14:26:28.833317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.514 [2024-07-25 14:26:28.833349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.833549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.833582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.833691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.833723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.833852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.833898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 
00:24:59.515 [2024-07-25 14:26:28.834054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.834110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.834268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.834315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.834455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.834509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.834654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.834692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.834864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.834911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.835109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.835157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.835359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.835406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.835632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.835679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.835843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.835889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.836080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.836128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 
00:24:59.515 [2024-07-25 14:26:28.836315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.836361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.836554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.836602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.836829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.836876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.837080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.837127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.837326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.837372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.837574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.837620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.837808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.837854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.838017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.838073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.838305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.838351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.838581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.838614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 
00:24:59.515 [2024-07-25 14:26:28.838738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.838770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.839028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.839068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.839177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.839208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.839317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.839350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.839491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.839523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.839699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.839745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.839947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.839993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.840166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.840216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.840413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.840460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.840615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.840664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 
00:24:59.515 [2024-07-25 14:26:28.840815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.840870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.841035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.841095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.841284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.841318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.841455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.841487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.841630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.841662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.515 qpair failed and we were unable to recover it. 00:24:59.515 [2024-07-25 14:26:28.841847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.515 [2024-07-25 14:26:28.841893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.842089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.842137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.842305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.842352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.842522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.842568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.842753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.842799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 
00:24:59.516 [2024-07-25 14:26:28.842957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.843004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.843231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.843264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.843378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.843411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.843609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.843641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.843792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.843841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.516 qpair failed and we were unable to recover it. 00:24:59.516 [2024-07-25 14:26:28.844079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.516 [2024-07-25 14:26:28.844126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.844282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.844328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.844542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.844592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.844817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.844863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.845070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.845118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 
00:24:59.517 [2024-07-25 14:26:28.845306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.845357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.845511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.845564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.845811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.845858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.846095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.846144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.846354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.846400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.846633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.846680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.846871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.846917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.847116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.847163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.847357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.847390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.847507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.847539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 
00:24:59.517 [2024-07-25 14:26:28.847685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.847718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.847854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.847886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.848106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.848154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.848387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.848434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.848661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.848707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.848913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.848959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.849165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.849198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.849308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.849342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.849482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.849528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.849691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.849737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 
00:24:59.517 [2024-07-25 14:26:28.849931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.849979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.850218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.850271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.850486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.850519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.850626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.850658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.850759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.850792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.850937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.850969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.851192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.851243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.851414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.851464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.851670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.851722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.851970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.852002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 
00:24:59.517 [2024-07-25 14:26:28.852150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.852183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.852390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.852436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.852681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.852731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.852960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.852992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.853147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.853204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.853404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.853455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.853682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.853714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.853851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.853883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.854030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.854093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 00:24:59.517 [2024-07-25 14:26:28.854344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.854394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.517 qpair failed and we were unable to recover it. 
00:24:59.517 [2024-07-25 14:26:28.854642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.517 [2024-07-25 14:26:28.854692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.854942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.854974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.855119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.855152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.855314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.855361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.855543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.855592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.855834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.855883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.856098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.856149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.856322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.856373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.856573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.856630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.856850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.856899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 
00:24:59.518 [2024-07-25 14:26:28.857083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.857135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.857351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.857401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.857618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.857650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.857781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.857812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.857943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.857975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.858114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.858147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.858405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.858455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.858619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.858671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.858854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.858904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.859146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.859196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 
00:24:59.518 [2024-07-25 14:26:28.859397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.859447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.859648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.859698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.859949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.859999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.860209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.860261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.860513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.860559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.860699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.860764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.860973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.861023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.861252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.861302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.861502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.861552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 00:24:59.518 [2024-07-25 14:26:28.861750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.518 [2024-07-25 14:26:28.861800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.518 qpair failed and we were unable to recover it. 
00:24:59.518 [2024-07-25 14:26:28.862011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.518 [2024-07-25 14:26:28.862074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420
00:24:59.518 qpair failed and we were unable to recover it.
[... the same three-message cycle (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every connection attempt logged between 14:26:28.862 and 14:26:28.912 ...]
00:24:59.523 [2024-07-25 14:26:28.912789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.523 [2024-07-25 14:26:28.912843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420
00:24:59.523 qpair failed and we were unable to recover it.
00:24:59.523 [2024-07-25 14:26:28.913025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.913089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.913330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.913395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.913577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.913610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.913829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.913861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.914009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.914041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.914170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.914203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.914422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.914474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.914750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.914796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.914973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.915005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.915165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.915198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 
00:24:59.523 [2024-07-25 14:26:28.915336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.915368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.915550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.915603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.915827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.915887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.916110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.916165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.916404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.916457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.916658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.916711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.523 qpair failed and we were unable to recover it. 00:24:59.523 [2024-07-25 14:26:28.916899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.523 [2024-07-25 14:26:28.916951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.917173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.917227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.917467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.917514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.917677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.917743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 
00:24:59.524 [2024-07-25 14:26:28.917956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.917988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.918104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.918137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.918245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.918277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.918420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.918453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.918665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.918719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.918992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.919045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.919285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.919338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.919568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.919621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.919811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.919868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.920087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.920142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 
00:24:59.524 [2024-07-25 14:26:28.920390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.920436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.920611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.920682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.920863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.920917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.921096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.921150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.921320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.921373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.921563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.921617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.921816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.921873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.922091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.922149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.922332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.922389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.922576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.922644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 
00:24:59.524 [2024-07-25 14:26:28.922833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.922890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.923127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.923180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.923354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.923408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.923666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.923719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.923897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.923950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.924144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.924199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.924394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.924447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.924639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.924692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.924935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.924982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.925140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.925188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 
00:24:59.524 [2024-07-25 14:26:28.925445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.925492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.925672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.925725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.925927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.925984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.926282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.926365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.926599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.926663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.926892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.926956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.927238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.927296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.927570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.927619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.927823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.927878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.928088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.928173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 
00:24:59.524 [2024-07-25 14:26:28.928447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.928508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.928788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.928853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.929089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.929146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.929370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.929424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.524 [2024-07-25 14:26:28.929618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.524 [2024-07-25 14:26:28.929675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.524 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.929892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.929947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.930205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.930268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.930533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.930590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.930817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.930876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.931097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.931160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 
00:24:59.525 [2024-07-25 14:26:28.931434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.931493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.931710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.931783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.932048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.932148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.932384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.932444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.932744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.932805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.933048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.933117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.933358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.933420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.933664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.933721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.934003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.934082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.934324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.934383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 
00:24:59.525 [2024-07-25 14:26:28.934685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.934746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.934979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.935037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.935342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.935417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.935640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.935701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.935945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.936018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.936298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.936361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.936548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.936607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.936852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.936901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.937159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.937219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.937437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.937503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 
00:24:59.525 [2024-07-25 14:26:28.937714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.937773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.937999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.938072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.938331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.938392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.938645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.938733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.938960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.939020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.939237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.939302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.939536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.939593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.939789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.939846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.940106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.940155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.940345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.940392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 
00:24:59.525 [2024-07-25 14:26:28.940546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.940592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.940750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.940797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.941010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.941081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.941355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.941412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.941622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.941679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.941859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.941916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.942122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.942181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.942429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.942487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.942756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.942812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.943045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.943116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 
00:24:59.525 [2024-07-25 14:26:28.943310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.943366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.943595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.943651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.943837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.943897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.944097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.944156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.944347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.944405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.944642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.944702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.944932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.944989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.945243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.945302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.525 [2024-07-25 14:26:28.945542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.525 [2024-07-25 14:26:28.945589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.525 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.945749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.945795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 
00:24:59.526 [2024-07-25 14:26:28.945984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.946051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.946313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.946372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.946574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.946630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.946901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.946958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.947179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.947239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.947466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.947523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.947748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.947805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.948013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.948090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.948333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.948390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.948628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.948675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 
00:24:59.526 [2024-07-25 14:26:28.948822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.948868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.949100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.949161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.949354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.949410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.949646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.949703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.949980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.950027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.950215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.950273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.950508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.950565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.950798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.950858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.951180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.951258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.951540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.951597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 
00:24:59.526 [2024-07-25 14:26:28.951850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.951897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.952084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.952158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.952392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.952448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.952703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.952760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.953003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.953075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.953315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.953372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.953609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.953667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.953881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.953947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.954153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.954213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.954461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.954519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 
00:24:59.526 [2024-07-25 14:26:28.954769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.954826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.955070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.955131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.955413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.955488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.955723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.955769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.955937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.956010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.956104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222a230 (9): Bad file descriptor 00:24:59.526 [2024-07-25 14:26:28.956447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.956536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.956800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.956852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.957017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.957101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.957380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.957429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 
00:24:59.526 [2024-07-25 14:26:28.957623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.957690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.957973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.958082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.958372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.958432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.958684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.958730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.958908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.958966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.959184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.959243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.959500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.959576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.959853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.959910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.960141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.960200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.960446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.960503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 
00:24:59.526 [2024-07-25 14:26:28.960712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.960787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.961022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.961091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.961325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.961401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.961617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.961693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.961940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.961997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.962258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.526 [2024-07-25 14:26:28.962335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.526 qpair failed and we were unable to recover it. 00:24:59.526 [2024-07-25 14:26:28.962640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.962717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.962917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.962974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.963247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.963326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.963584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.963661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 
00:24:59.527 [2024-07-25 14:26:28.963875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.963932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.964143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.964209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.964475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.964550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.964765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.964840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.965078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.965137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.965447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.965522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.965759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.965834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.966031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.966111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.966352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.966408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.966581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.966649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 
00:24:59.527 [2024-07-25 14:26:28.966925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.967001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.967223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.967300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.967578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.967637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.967864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.967939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.968230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.968306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.968550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.968627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.968861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.968918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.969159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.969235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.969506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.969582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.969884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.969958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 
00:24:59.527 [2024-07-25 14:26:28.970181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.970259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.970514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.970590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.970842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.970900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.971136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.971213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.971443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.971519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.971758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.971834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.972040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.972114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.972311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.972387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.972625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.972700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.527 [2024-07-25 14:26:28.972989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.973047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 
00:24:59.527 [2024-07-25 14:26:28.973337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.527 [2024-07-25 14:26:28.973400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.527 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.973609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.973686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.973876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.973935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.974188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.974266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.974478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.974554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.974846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.974922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.975143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.975219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.975426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.975502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.975802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.975884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.976177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.976253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 
00:24:59.528 [2024-07-25 14:26:28.976520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.976598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.976779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.976835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.977102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.977161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.977401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.977476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.977787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.977863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.978094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.978154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.978465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.978539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.978788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.978862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.979131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.979211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.979474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.979550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 
00:24:59.528 [2024-07-25 14:26:28.979758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.979835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.980087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.980146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.980455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.980531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.980785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.980861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.981094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.981182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.981489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.981566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.981773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.981849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.982120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.982178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.982453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.982528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.982727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.982805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 
00:24:59.528 [2024-07-25 14:26:28.983082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.983140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.983399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.983474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.983701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.983777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.984000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.984086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.984306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.984382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.984608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.984684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.984929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.984986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.985226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.985303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.985571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.985648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.985888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.985945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 
00:24:59.528 [2024-07-25 14:26:28.986181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.986261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.986484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.986559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.986819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.986894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.987198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.987275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.987485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.987561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.987810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.987885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.988125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.988199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.988465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.988540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.988815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.988873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.989107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.989187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 
00:24:59.528 [2024-07-25 14:26:28.989410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.989485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.989789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.989865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.990153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.990232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.528 [2024-07-25 14:26:28.990503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.528 [2024-07-25 14:26:28.990579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.528 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.990815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.990873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.991069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.991127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.991381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.991455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.991719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.991796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.991969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.992026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.992303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.992380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 
00:24:59.529 [2024-07-25 14:26:28.992650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.992727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.992964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.993022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.993346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.993422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.993678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.993753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.994023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.994096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.994363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.994438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.994656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.994731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.994933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.994991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.995272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.995350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.995620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.995695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 
00:24:59.529 [2024-07-25 14:26:28.995900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.995958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.996212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.996290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.996582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.996640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.996857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.996923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.997190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.997266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.997537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.997612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.997840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.997899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.998150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.998227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.998456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.998531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.998762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.998837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 
00:24:59.529 [2024-07-25 14:26:28.999030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.999100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.999372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.999429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.999671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:28.999746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:28.999981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.000038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.000365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.000442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.000719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.000794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.001043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.001117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.001380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.001457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.001710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.001786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.002027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.002100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 
00:24:59.529 [2024-07-25 14:26:29.002323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.002399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.002700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.002775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.003030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.003101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.003320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.003401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.003674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.003734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.003980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.004040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.004306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.004382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.004586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.004662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.004884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.004959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.005190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.005267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 
00:24:59.529 [2024-07-25 14:26:29.005532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.005616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.005887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.005961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.006249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.006327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.529 [2024-07-25 14:26:29.006555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.529 [2024-07-25 14:26:29.006632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.529 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.006825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.006884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.007182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.007259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.007518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.007593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.007822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.007879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.008108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.008167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.008442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.008518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 
00:24:59.530 [2024-07-25 14:26:29.008776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.008852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.009189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.009280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.009513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.009589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.009859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.009917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.010227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.010304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.010512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.010586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.010859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.010916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.011152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.011231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.011489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.011564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.011848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.011905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 
00:24:59.530 [2024-07-25 14:26:29.012136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.012216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.012466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.012542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.012735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.012792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.012990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.013047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.013323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.013399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.013673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.013749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.013983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.014042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.014263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.014339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.014636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.014713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.014951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.015008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 
00:24:59.530 [2024-07-25 14:26:29.015339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.015423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.015679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.015756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.015960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.016020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.016341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.016419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.016727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.016803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.017049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.017134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.017361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.017436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.017719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.017796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.018036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.018114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.018414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.018491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 
00:24:59.530 [2024-07-25 14:26:29.018793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.018869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.019157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.019217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.019438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.019513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.019754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.019831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.530 qpair failed and we were unable to recover it. 00:24:59.530 [2024-07-25 14:26:29.020032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.530 [2024-07-25 14:26:29.020101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.020360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.020434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.020738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.020813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.021092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.021150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.021406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.021481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.021739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.021814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 
00:24:59.531 [2024-07-25 14:26:29.022099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.022158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.022468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.022545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.022789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.022846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.023056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.023140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.023381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.023439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.023698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.023772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.023968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.024026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.024266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.024350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.024614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.024690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.024964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.025022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 
00:24:59.531 [2024-07-25 14:26:29.025307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.025383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.025693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.025768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.026008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.026080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.026308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.026386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.026690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.026766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.027012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.027085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.027339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.027413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.027639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.027714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.027957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.028023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.028299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.028376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 
00:24:59.531 [2024-07-25 14:26:29.028619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.028679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.028949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.029006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.029291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.029367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.029659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.029735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.029957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.030013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.030337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.030435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.030774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.030843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.031045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.031171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.031502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.031569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.031792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.031856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 
00:24:59.531 [2024-07-25 14:26:29.032119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.032194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.032454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.032523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.032793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.032866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.033148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.033210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.033416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.033474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.033730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.033796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.034048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.034146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.034394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.034454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.034751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.034818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 00:24:59.531 [2024-07-25 14:26:29.035105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.531 [2024-07-25 14:26:29.035166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.531 qpair failed and we were unable to recover it. 
00:24:59.531 [2024-07-25 14:26:29.035401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.035461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.035777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.035841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.036116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.036187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.036392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.036450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.036723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.036787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.037029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.037131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.037407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.037466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.037737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.037812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.038051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.038150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.038379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.038454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 
00:24:59.532 [2024-07-25 14:26:29.038718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.038782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.039108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.039177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.039439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.039499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.039806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.039870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.040186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.040250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.040485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.040546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.040872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.040938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.041206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.041268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.041561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.041632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.041922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.041989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 
00:24:59.532 [2024-07-25 14:26:29.042288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.042350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.042609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.042669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.042933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.042999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.043358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.043419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.043710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.043777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.044111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.044173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.044392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.044451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.044749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.044814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.045122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.045183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.045471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.045533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 
00:24:59.532 [2024-07-25 14:26:29.045849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.045913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.046220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.046294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.046585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.046650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.046975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.047049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.047379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.047445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.047746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.047811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.048080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.048150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.048441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.048520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.048758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.048824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.049092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.049159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 
00:24:59.532 [2024-07-25 14:26:29.049421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.049488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.049789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.049851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.050115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.050193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.050466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.050534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.050840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.532 [2024-07-25 14:26:29.050905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.532 qpair failed and we were unable to recover it. 00:24:59.532 [2024-07-25 14:26:29.051199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.051267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.051520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.051586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.051875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.051941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.052190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.052258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.052529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.052600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 
00:24:59.533 [2024-07-25 14:26:29.052899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.052963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.053230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.053297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.053571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.053634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.053902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.053983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.054267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.054334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.054590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.054668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.054932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.054996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.055314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.055381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.055660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.055741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.056043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.056150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 
00:24:59.533 [2024-07-25 14:26:29.056419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.056484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.056784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.056848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.057159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.057228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.057489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.057552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.057819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.057897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.058206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.058272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.058565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.058635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.058875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.058939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.059207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.059273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.059570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.059637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 
00:24:59.533 [2024-07-25 14:26:29.059833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.059896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.060189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.060263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.060553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.060618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.060914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.060987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.061263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.533 [2024-07-25 14:26:29.061331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.533 qpair failed and we were unable to recover it. 00:24:59.533 [2024-07-25 14:26:29.061606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.061674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.061954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.062021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.062278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.062344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.062595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.062678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.062905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.062970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 
00:24:59.534 [2024-07-25 14:26:29.063254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.063321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.063651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.063728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.064025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.064127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.064431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.064495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.064744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.064807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.065006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.065096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.065351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.065417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.065626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.065689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.065994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.066057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.066348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.066412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 
00:24:59.534 [2024-07-25 14:26:29.066705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.066767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.067081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.067148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.067443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.067506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.067757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.067821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.068120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.068187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.068479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.068543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.068792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.068855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.069102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.069167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.069422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.069496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.069758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.069822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 
00:24:59.534 [2024-07-25 14:26:29.070139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.070204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.070511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.070576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.534 [2024-07-25 14:26:29.070832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.534 [2024-07-25 14:26:29.070895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.534 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.071194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.071259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.071573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.071637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.071886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.071948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.072172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.072239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.072492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.072555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.072810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.072872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.073165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.073231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 
00:24:59.535 [2024-07-25 14:26:29.073498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.073560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.073852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.073914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.074235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.074301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.074609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.074672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.074964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.075027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.075360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.075425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.075644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.075707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.075898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.075961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.076237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.076303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.076566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.076629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 
00:24:59.535 [2024-07-25 14:26:29.076876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.076939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.077248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.077314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.077575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.077639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.535 qpair failed and we were unable to recover it. 00:24:59.535 [2024-07-25 14:26:29.077881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.535 [2024-07-25 14:26:29.077944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.078240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.078305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.078616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.078681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.078929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.078992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.079270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.079336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.079554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.079620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.079878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.079945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 
00:24:59.536 [2024-07-25 14:26:29.080265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.080330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.080631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.080694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.080900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.080963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.081174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.081240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.081487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.081553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.081838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.081903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.082155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.082220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.082482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.082546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.082797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.082859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.083139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.083204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 
00:24:59.536 [2024-07-25 14:26:29.083507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.083570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.083865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.083928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.084199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.084266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.084566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.084629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.084872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.084937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.085240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.085305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.085560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.085625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.085918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.085982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.086262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.086326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.086622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.086685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 
00:24:59.536 [2024-07-25 14:26:29.086937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.087001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.087385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.087448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.087750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.087814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.088077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.088143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.088401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.088467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.088717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.088780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.089082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.089146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.089452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.089516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.536 [2024-07-25 14:26:29.089810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.536 [2024-07-25 14:26:29.089873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.536 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.090139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.090204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 
00:24:59.537 [2024-07-25 14:26:29.090431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.090495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.090740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.090804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.091072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.091135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.091383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.091446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.091743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.091807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.092055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.092156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.092374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.092440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.092731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.092794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.093089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.093153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.093406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.093472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 
00:24:59.537 [2024-07-25 14:26:29.093728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.093791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.094010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.094092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.094390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.094453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.094655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.094720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.094969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.095034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.095355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.095420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.095677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.095740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.096039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.096121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.096342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.096407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.096647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.096713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 
00:24:59.537 [2024-07-25 14:26:29.097012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.097092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.097412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.097475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.097682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.097749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.098044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.098127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.098381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.098444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.098689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.098755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.099054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.099153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.099447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.099510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.099711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.099776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.100079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.100143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 
00:24:59.537 [2024-07-25 14:26:29.100413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.100477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.100790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.100854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.101123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.101188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.101441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.101506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.101767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.101831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.102127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.102191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.102477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.102540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.102768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.102832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.103073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.103137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.103431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.103494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 
00:24:59.537 [2024-07-25 14:26:29.103743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.103810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.104107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.104172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.104466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.104529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.104802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.104864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.105123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.105191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.105459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.105533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.105790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.105854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.106110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.106177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.106425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.106488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 00:24:59.537 [2024-07-25 14:26:29.106731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.537 [2024-07-25 14:26:29.106794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.537 qpair failed and we were unable to recover it. 
00:24:59.538 [2024-07-25 14:26:29.107049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.107125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.107378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.107442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.107743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.107805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.108111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.108175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.108432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.108494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.108703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.108767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.109097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.109162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.109416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.109480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.109737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.109800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.110109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.110174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 
00:24:59.538 [2024-07-25 14:26:29.110411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.110476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.110765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.110828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.111035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.111113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.111360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.111422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.111720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.111783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.112077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.112141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.112410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.112475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.112690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.112755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.113007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.113086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.113339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.113403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 
00:24:59.538 [2024-07-25 14:26:29.113703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.113766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.114014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.114095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.114357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.114421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.114685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.114748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.114995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.115056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.115331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.115393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.115621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.115685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.115980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.116042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.116329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.116392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.116665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.116728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 
00:24:59.538 [2024-07-25 14:26:29.117026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.117134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.117400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.117465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.117737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.117799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.118081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.118146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.118401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.118463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.118676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.118749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.118999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.119080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.119332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.119395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.119694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.119756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 00:24:59.538 [2024-07-25 14:26:29.119999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.538 [2024-07-25 14:26:29.120077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.538 qpair failed and we were unable to recover it. 
00:24:59.818 [2024-07-25 14:26:29.185130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.185197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.185498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.185563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.185833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.185901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.186188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.186255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.186527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.186590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.186832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.186920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.187233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.187300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.187603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.187668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.187967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.188030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.188345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.188412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 
00:24:59.818 [2024-07-25 14:26:29.188717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.188780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.189032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.818 [2024-07-25 14:26:29.189120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.818 qpair failed and we were unable to recover it. 00:24:59.818 [2024-07-25 14:26:29.189391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.189455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.189759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.189831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.190089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.190154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.190402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.190476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.190747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.190811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.191120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.191197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.191443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.191508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.191819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.191884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 
00:24:59.819 [2024-07-25 14:26:29.192149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.192215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.192437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.192499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.192753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.192824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.193148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.193214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.193470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.193533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.193759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.193826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.194136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.194203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.194457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.194529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.194788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.194851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.195057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.195156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 
00:24:59.819 [2024-07-25 14:26:29.195458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.195525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.195778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.195842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.196101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.196179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.196419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.196483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.196732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.196807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.197107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.197175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.197392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.197457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.197746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.197811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.198123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.198188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.198411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.198488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 
00:24:59.819 [2024-07-25 14:26:29.198753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.198817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.199085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.199165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.199476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.199540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.199771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.819 [2024-07-25 14:26:29.199837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.819 qpair failed and we were unable to recover it. 00:24:59.819 [2024-07-25 14:26:29.200098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.200165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.200436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.200510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.200776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.200853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.201166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.201231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.201496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.201566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.201856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.201920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 
00:24:59.820 [2024-07-25 14:26:29.202219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.202284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.202559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.202627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.202915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.202978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.203317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.203384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.203641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.203707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.203966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.204032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.204357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.204422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.204694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.204773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.205100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.205166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.205466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.205545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 
00:24:59.820 [2024-07-25 14:26:29.205850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.205913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.206214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.206294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.206541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.206607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.206850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.206916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.207194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.207259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.207528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.207596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.207858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.207920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.208213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.208279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.208581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.208644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.208906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.208969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 
00:24:59.820 [2024-07-25 14:26:29.209280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.209344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.209607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.209673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.209988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.210051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.210325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.210391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.210703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.210765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.210987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.211049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.211372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.211435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.211733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.211794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.820 [2024-07-25 14:26:29.212005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.820 [2024-07-25 14:26:29.212089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.820 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.212388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.212452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 
00:24:59.821 [2024-07-25 14:26:29.212706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.212768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.213057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.213161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.213449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.213511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.213801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.213864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.214116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.214183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.214473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.214545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.214852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.214915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.215173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.215238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.215479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.215541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.215734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.215797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 
00:24:59.821 [2024-07-25 14:26:29.216037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.216124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.216423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.216486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.216713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.216776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.217055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.217134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.217382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.217445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.217742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.217805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.218053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.218135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.218378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.218441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.218733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.218797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.219055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.219141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 
00:24:59.821 [2024-07-25 14:26:29.219396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.219459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.219705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.219769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.220057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.220139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.220421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.220485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.220776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.220838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.221134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.221200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.221448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.221512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.221780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.221843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.222081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.222147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.222402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.222466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 
00:24:59.821 [2024-07-25 14:26:29.222758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.222820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.223134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.223199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.223425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.223490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.821 [2024-07-25 14:26:29.223783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.821 [2024-07-25 14:26:29.223846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.821 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.224055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.224138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.224426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.224489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.224693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.224756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.224974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.225038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.225315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.225381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.225591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.225657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-07-25 14:26:29.225879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.225942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.226200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.226265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.226518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.226583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.226839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.226901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.227189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.227253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.227509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.227582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.227860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.227922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.228215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.228279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.228601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.228664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.228972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.229035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-07-25 14:26:29.229322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.229386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.229625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.229691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.229976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.230039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.230356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.230419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.230718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.230781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.231084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.231148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.231441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.231505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.231798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.231861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.232129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.232194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.232455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.232519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 
00:24:59.822 [2024-07-25 14:26:29.232816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.232878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.233138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.233203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.233448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.233510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.233767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.233829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.234136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.234201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.234479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.822 [2024-07-25 14:26:29.234542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.822 qpair failed and we were unable to recover it. 00:24:59.822 [2024-07-25 14:26:29.234731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-07-25 14:26:29.234793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-07-25 14:26:29.235042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-07-25 14:26:29.235121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-07-25 14:26:29.235425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-07-25 14:26:29.235489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 00:24:59.823 [2024-07-25 14:26:29.235745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.823 [2024-07-25 14:26:29.235808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.823 qpair failed and we were unable to recover it. 
00:24:59.829 [2024-07-25 14:26:29.302398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.302461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.302716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.302779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.303088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.303162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.303372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.303435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.303703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.303766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.304074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.304138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.304390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.304456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.304717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.304780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.305082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.305146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.305410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.305474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 
00:24:59.829 [2024-07-25 14:26:29.305737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.305799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.306005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.306084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.306382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.306444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.829 [2024-07-25 14:26:29.306743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.829 [2024-07-25 14:26:29.306805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.829 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.307100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.307183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.307435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.307498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.307769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.307831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.308135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.308200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.308491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.308555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.308852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.308915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 
00:24:59.830 [2024-07-25 14:26:29.309113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.309177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.309433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.309496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.309791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.309855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.310157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.310221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.310483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.310547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.310846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.310909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.311190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.311254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.311560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.311623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.311881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.311944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.312266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.312331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 
00:24:59.830 [2024-07-25 14:26:29.312640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.312703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.312909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.312970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.313293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.313358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.313617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.313680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.313872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.313935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.314178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.314242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.314547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.314610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.314877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.314939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.315198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.315262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.315561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.315624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 
00:24:59.830 [2024-07-25 14:26:29.315835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.315901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.316194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.316258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.316555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.316628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.316882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.316947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.317168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.317234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.317530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.317592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.317898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.317960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.318229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.318296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.830 qpair failed and we were unable to recover it. 00:24:59.830 [2024-07-25 14:26:29.318544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.830 [2024-07-25 14:26:29.318609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.318901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.318964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 
00:24:59.831 [2024-07-25 14:26:29.319185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.319251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.319544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.319606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.319850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.319912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.320153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.320218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.320443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.320506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.320762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.320825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.321113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.321179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.321471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.321533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.321839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.321902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.322195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.322260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 
00:24:59.831 [2024-07-25 14:26:29.322571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.322632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.322892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.322954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.323216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.323280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.323482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.323547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.323775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.323838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.324133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.324197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.324501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.324564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.324864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.324926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.325195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.325261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.325568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.325633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 
00:24:59.831 [2024-07-25 14:26:29.325940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.326002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.326307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.326371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.326623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.326686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.326937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.327001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.327271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.327336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.327560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.327624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.327836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.327901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.328197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.328263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.328563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.328624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.328819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.328882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 
00:24:59.831 [2024-07-25 14:26:29.329113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.329178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.329429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.329494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.329753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.831 [2024-07-25 14:26:29.329829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.831 qpair failed and we were unable to recover it. 00:24:59.831 [2024-07-25 14:26:29.330122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.330188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.330454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.330517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.330769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.330832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.331095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.331161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.331477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.331540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.331806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.331871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.332129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.332195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 
00:24:59.832 [2024-07-25 14:26:29.332432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.332496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.332757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.332819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.333106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.333171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.333459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.333522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.333727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.333793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.334097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.334162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.334448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.334511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.334803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.334865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.335120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.335187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.335448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.335510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 
00:24:59.832 [2024-07-25 14:26:29.335770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.335833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.336092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.336157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.336415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.336480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.336739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.336802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.337031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.337118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.337369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.337432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.337736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.337799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.338087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.338152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.338451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.338514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.338728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.338793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 
00:24:59.832 [2024-07-25 14:26:29.339040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.339120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.339422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.339486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.339726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.339790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.340090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.340154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.340439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.340502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.340697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.340761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.341050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.341142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.341395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.832 [2024-07-25 14:26:29.341457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.832 qpair failed and we were unable to recover it. 00:24:59.832 [2024-07-25 14:26:29.341675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.341737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.342040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.342124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-07-25 14:26:29.342374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.342437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.342725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.342787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.343038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.343130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.343388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.343453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.343767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.343831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.344122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.344188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.344438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.344501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.344755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.344818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.345105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.345169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 00:24:59.833 [2024-07-25 14:26:29.345478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.833 [2024-07-25 14:26:29.345541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.833 qpair failed and we were unable to recover it. 
00:24:59.833 [2024-07-25 14:26:29.345773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.833 [2024-07-25 14:26:29.345836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420
00:24:59.833 qpair failed and we were unable to recover it.
00:24:59.833 [... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. — repeats for every reconnect attempt between 14:26:29.345773 and 14:26:29.415998 ...]
00:24:59.839 [2024-07-25 14:26:29.415998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.839 [2024-07-25 14:26:29.416086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420
00:24:59.839 qpair failed and we were unable to recover it.
00:24:59.839 [2024-07-25 14:26:29.416409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.416475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.416739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.416801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.417093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.417159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.417417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.417482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.417705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.417772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.418028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.418136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.418409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.418492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.418798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.418861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.419120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.419200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.419489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.419556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 
00:24:59.839 [2024-07-25 14:26:29.419848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.839 [2024-07-25 14:26:29.419928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.839 qpair failed and we were unable to recover it. 00:24:59.839 [2024-07-25 14:26:29.420235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.420301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.420513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.420595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.420906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.420971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.421249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.421316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.421625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.421691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.421985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.422048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.422359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.422435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.422664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.422727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.422981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.423082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 
00:24:59.840 [2024-07-25 14:26:29.423357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.423421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.423665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.423744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.423974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.424040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.424389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.424454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.424685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.424750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.424974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.425038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.425372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.425448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.425675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.425739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.425957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.426022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.426341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.426409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 
00:24:59.840 [2024-07-25 14:26:29.426687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.426750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.427012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.427110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.427411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.427475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.427780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.427860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.428140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.428207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.428475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.428552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.428840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.428906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.429165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.429231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.429512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.429577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.429811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.429874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 
00:24:59.840 [2024-07-25 14:26:29.430165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.430247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.430523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.430586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.430845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.430907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.431189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.431257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.431519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.431583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.840 [2024-07-25 14:26:29.431851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.840 [2024-07-25 14:26:29.431916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.840 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.432140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.432206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.432460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.432532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.432751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.432816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.433105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.433182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 
00:24:59.841 [2024-07-25 14:26:29.433458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.433524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.433791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.433855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.434147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.434222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.434519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.434583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.434832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.434900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.435128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.435195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.435493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.435556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.435838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.435902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.436193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.436260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.436540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.436605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 
00:24:59.841 [2024-07-25 14:26:29.436847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.436910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.437157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.437222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.437498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.437565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.437840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.437905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.438166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.438234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.438534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.438597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.438903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.438969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.439203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.439271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.439576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.439647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.439928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.439992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 
00:24:59.841 [2024-07-25 14:26:29.440295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.440361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.440651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.440716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.440996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.441081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.441346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.441416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.441686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.441750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.442051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.442141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.442457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.442522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.442784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.442861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.443148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.443215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.841 [2024-07-25 14:26:29.443508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.443590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 
00:24:59.841 [2024-07-25 14:26:29.443895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.841 [2024-07-25 14:26:29.443959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.841 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.444232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.444309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.444550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.444616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.444881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.444946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.445202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.445285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.445562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.445627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.445884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.445947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.446201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.446269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.446531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.446595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.446807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.446897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 
00:24:59.842 [2024-07-25 14:26:29.447140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.447207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.447470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.447542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.447796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.447861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.448120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.448185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.448424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.448491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.448744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.448808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.449005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.449105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:24:59.842 [2024-07-25 14:26:29.449388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.842 [2024-07-25 14:26:29.449453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:24:59.842 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.449750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.449818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.450122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.450190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-07-25 14:26:29.450486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.450550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.450784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.450849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.451118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.451185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.451459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.451525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.451844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.451908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.452237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.452302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.452573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.452639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.452859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.452921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.453155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.453232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.453509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.453573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 
00:25:00.116 [2024-07-25 14:26:29.453834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.453900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.454199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.454265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.454520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.454587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.454862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.454927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.455205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.455271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.455567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.455632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.455863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.116 [2024-07-25 14:26:29.455929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.116 qpair failed and we were unable to recover it. 00:25:00.116 [2024-07-25 14:26:29.456184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.456263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.456581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.456646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.456904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.456985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-07-25 14:26:29.457284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.457352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.457632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.457710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.457951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.458016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.458332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.458397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.458606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.458686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.458994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.459079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.459311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.459378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.459658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.459723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.459982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.460044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.460356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.460440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-07-25 14:26:29.460739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.460802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.461081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.461150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.461439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.461502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.461733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.461802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.462084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.462152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.462454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.462518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.462775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.462842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.463108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.463176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.463446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.463512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.463769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.463835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 
00:25:00.117 [2024-07-25 14:26:29.464073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.464149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.464418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.464484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.464728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.464793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.465114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.465182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.465436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.465501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.465788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.465855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.117 qpair failed and we were unable to recover it. 00:25:00.117 [2024-07-25 14:26:29.466158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.117 [2024-07-25 14:26:29.466225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.466489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.466554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.466824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.466888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.467165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.467234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 
00:25:00.118 [2024-07-25 14:26:29.467486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.467551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.467805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.467870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.468199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.468266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.468523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.468594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.468867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.468934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.469159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.469225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.469518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.469584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.469838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.469904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.470133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.470200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.470464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.470530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 
00:25:00.118 [2024-07-25 14:26:29.470752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.470816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.471095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.471162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.471434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.471498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.471806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.471871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.472115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.472182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.472399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.472466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.472781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.472845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.473142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.473221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.473539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.473605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.473910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.473984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 
00:25:00.118 [2024-07-25 14:26:29.474327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.474394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.474662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.474740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.475020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.475110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.475426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.475491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.475749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.475814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.118 [2024-07-25 14:26:29.476041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.118 [2024-07-25 14:26:29.476128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.118 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.476394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.476464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.476731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.476793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.476980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.477043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.477370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.477436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 
00:25:00.119 [2024-07-25 14:26:29.477655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.477718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.477938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.478016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.478361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.478426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.478694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.478764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.479032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.479125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.479428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.479511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.479830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.479894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.480152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.480232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.480517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.480582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.480843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.480917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 
00:25:00.119 [2024-07-25 14:26:29.481177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.481258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.481528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.481591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.481900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.481965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.482303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.482371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.482634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.482705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.482957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.483021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.483362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.483444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.483773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.483838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.484144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.484211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.484491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.484557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 
00:25:00.119 [2024-07-25 14:26:29.484865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.484928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.485161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.485231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.485517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.485584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.485889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.485954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.486222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.486288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.486530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.486596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.486838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.486903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.119 [2024-07-25 14:26:29.487133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.119 [2024-07-25 14:26:29.487198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.119 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.487458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.487538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.487852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.487928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 
00:25:00.120 [2024-07-25 14:26:29.488159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.488235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.488507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.488572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.488850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.488914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.489170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.489238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.489499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.489562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.489810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.489887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.490166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.490233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.490523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.490591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.490888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.490952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.491192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.491259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 
00:25:00.120 [2024-07-25 14:26:29.491526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.491592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.491847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.491910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.492140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.492220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.492536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.492601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.492850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.492918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.493191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.493257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.493510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.493576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.493843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.493909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.494204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.494270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.494521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.494598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 
00:25:00.120 [2024-07-25 14:26:29.494896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.494960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.495201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.495275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.495559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.495625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.495882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.495946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.496196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.496267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.496507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.496570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.496889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.496964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.497259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.497325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.497589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.497657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.497896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.497961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 
00:25:00.120 [2024-07-25 14:26:29.498237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.498303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.120 [2024-07-25 14:26:29.498543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.120 [2024-07-25 14:26:29.498619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.120 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.498880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.498943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.499245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.499312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.499567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.499631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.499922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.499997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.500216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.500282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.500532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.500594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.500879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.500943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.501229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.501305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 
00:25:00.121 [2024-07-25 14:26:29.501619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.501684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.501938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.502002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.502318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.502385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.502679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.502747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.503092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.503159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.503415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.503482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.503775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.503840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.504043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.504136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.504432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.504511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.504770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.504834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 
00:25:00.121 [2024-07-25 14:26:29.505047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.505144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.505417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.505484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.505780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.505858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.506157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.506226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.506471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.506534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.506833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.506898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.507145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.507211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.507518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.507586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.507860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.121 [2024-07-25 14:26:29.507924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.121 qpair failed and we were unable to recover it. 00:25:00.121 [2024-07-25 14:26:29.508192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.508272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 
00:25:00.122 [2024-07-25 14:26:29.508529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.508593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.508847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.508910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.509172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.509249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.509530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.509594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.509812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.509889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.510129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.510196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.510479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.510543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.510842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.510909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.511134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.511200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.511494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.511560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 
00:25:00.122 [2024-07-25 14:26:29.511854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.511917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.512148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.512225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.512538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.512603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.512856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.512921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.513244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.513312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.513572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.513640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.513927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.513991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.514288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.514355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.514590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.514657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.514914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.514990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 
00:25:00.122 [2024-07-25 14:26:29.515286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.515354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.515579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.515642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.515936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.516012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.516294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.516361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.516617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.516695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.516982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.517047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.517299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.517365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.517630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.517697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.517975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.518039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 00:25:00.122 [2024-07-25 14:26:29.518293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.122 [2024-07-25 14:26:29.518369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.122 qpair failed and we were unable to recover it. 
00:25:00.122 [2024-07-25 14:26:29.518599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:25:00.122 [2024-07-25 14:26:29.518662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 
00:25:00.122 qpair failed and we were unable to recover it. 
00:25:00.122 [... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every consecutive reconnect attempt from 14:26:29.518 through 14:26:29.589 (log timestamps 00:25:00.122 through 00:25:00.129) ...]
00:25:00.129 [2024-07-25 14:26:29.589453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.589517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.589773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.589836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.590096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.590163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.590419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.590485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.590734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.590798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.591101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.591166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.591435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.591501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.591799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.591862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.592123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.592188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.592449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.592512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 
00:25:00.129 [2024-07-25 14:26:29.592821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.592883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.593107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.593174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.593470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.593534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.593786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.593849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.594114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.594181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.594431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.594495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.594742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.594806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.595045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.595123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.595367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.595432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.595680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.595746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 
00:25:00.129 [2024-07-25 14:26:29.596074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.596139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.596395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.596458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.596709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.596773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.597030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.597123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.597378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.597445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.597742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.597807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.598088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.129 [2024-07-25 14:26:29.598154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.129 qpair failed and we were unable to recover it. 00:25:00.129 [2024-07-25 14:26:29.598411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.598474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.598667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.598730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.598981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.599044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 
00:25:00.130 [2024-07-25 14:26:29.599317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.599383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.599661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.599723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.599968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.600032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.600296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.600362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.600612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.600677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.600923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.600987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.601267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.601333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.601582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.601645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.601848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.601912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.602177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.602243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 
00:25:00.130 [2024-07-25 14:26:29.602495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.602558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.602778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.602842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.603099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.603163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.603404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.603468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.603725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.603787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.604013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.604093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.604344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.604418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.604718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.604782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.605110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.605175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.605430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.605492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 
00:25:00.130 [2024-07-25 14:26:29.605751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.605814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.606089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.606154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.606408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.606470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.606732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.606795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.607097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.607163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.607462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.607524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.607784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.607849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.608179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.608244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.608487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.608549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.130 [2024-07-25 14:26:29.608865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.608928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 
00:25:00.130 [2024-07-25 14:26:29.609231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.130 [2024-07-25 14:26:29.609296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.130 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.609600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.609662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.609904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.609967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.610229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.610294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.610552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.610615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.610870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.610933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.611235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.611300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.611494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.611559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.611856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.611919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.612129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.612194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 
00:25:00.131 [2024-07-25 14:26:29.612479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.612542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.612834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.612897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.613152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.613216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.613440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.613503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.613755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.613818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.614121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.614185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.614474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.614540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.614836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.614899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.615153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.615217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.615507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.615570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 
00:25:00.131 [2024-07-25 14:26:29.615827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.615890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.616091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.616156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.616381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.616445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.616692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.616755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.616962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.617025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.617299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.617365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.617659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.617831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.618133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.618199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.618499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.618562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.618814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.618879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 
00:25:00.131 [2024-07-25 14:26:29.619204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.619271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.131 [2024-07-25 14:26:29.619479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.131 [2024-07-25 14:26:29.619545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.131 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.619797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.619862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.620164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.620229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.620493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.620556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.620803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.620868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.621163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.621228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.621528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.621592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.621895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.621957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.622237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.622302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 
00:25:00.132 [2024-07-25 14:26:29.622608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.622672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.622968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.623031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.623353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.623417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.623709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.623772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.624080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.624146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.624436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.624499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.624792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.624854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.625101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.625167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.625415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.625480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.625744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.625807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 
00:25:00.132 [2024-07-25 14:26:29.626080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.626145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.626448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.626511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.626766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.626831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.627149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.627215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.627468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.627530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.627752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.627815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.628131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.628196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.628406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.628468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.628751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.132 [2024-07-25 14:26:29.628813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.132 qpair failed and we were unable to recover it. 00:25:00.132 [2024-07-25 14:26:29.629084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.629149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 
00:25:00.133 [2024-07-25 14:26:29.629400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.629463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.629754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.629817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.630029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.630105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.630403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.630467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.630718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.630781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.631038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.631122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.631414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.631488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.631735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.631799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.632010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.632090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.632323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.632386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 
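For anyone scanning this stretch of the log: errno 111 on Linux is ECONNREFUSED, so each repeated pair of messages means the host side completed a TCP connect() attempt to 10.0.0.2:4420, the peer refused it because nothing was listening on that port, and nvme_tcp_qpair_connect_sock then gave up on the qpair (hence "qpair failed and we were unable to recover it"). That is the expected symptom while the target half of the disconnect test is down. The shell probe below is illustrative only, not part of the test script; the namespace name, address and port are simply copied from the log above.

# Illustrative only: is anything listening on the NVMe-oF TCP port inside
# the target's network namespace? No listener means connects are refused.
ip netns exec cvl_0_0_ns_spdk ss -ltn | grep 4420 || echo "no listener on port 4420"

# Roughly the same probe the initiator keeps making, via bash's /dev/tcp:
# the connect fails immediately with "Connection refused" while the target
# is not running.
timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' && echo connected || echo "connect refused or timed out"

The probe should succeed again once the target is restarted further down and re-creates its listener on port 4420.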
00:25:00.133 [2024-07-25 14:26:29.632634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.632697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.632949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.633015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1020296 Killed "${NVMF_APP[@]}" "$@" 00:25:00.133 [2024-07-25 14:26:29.633305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.633371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.633629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.633692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:00.133 [2024-07-25 14:26:29.633953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.634017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:00.133 [2024-07-25 14:26:29.634334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.634399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:00.133 [2024-07-25 14:26:29.634653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.634716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 
00:25:00.133 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:00.133 [2024-07-25 14:26:29.634981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.635057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.635344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.635410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.635706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.635770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.636091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.636156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.636407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.636471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.636765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.636827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.637132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.637198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.637460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.637523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 00:25:00.133 [2024-07-25 14:26:29.637808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.133 [2024-07-25 14:26:29.637872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.133 qpair failed and we were unable to recover it. 
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1020852
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1020852
00:25:00.133 [2024-07-25 14:26:29.638174 - 14:26:29.640726] same errno = 111 connect() / qpair-failure triplet as above, repeated 8 times, interleaved with the trace lines
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1020852 ']'
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:00.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:00.134 14:26:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.134 [2024-07-25 14:26:29.640972 - 14:26:29.643135] same errno = 111 connect() / qpair-failure triplet as above, repeated 7 times, interleaved with the trace lines
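The trace above shows the tc2 restart path: the old target (pid 1020296) is killed, disconnect_init starts a new nvmf_tgt inside the cvl_0_0_ns_spdk namespace (pid 1020852), and waitforlisten polls for the RPC socket /var/tmp/spdk.sock with max_retries=100 before the test continues. A minimal, self-contained sketch of that restart-and-wait pattern, using only values visible in the log (this is not the actual SPDK helper code, which does considerably more):

    #!/usr/bin/env bash
    # Sketch: restart an NVMe-oF target in a network namespace and wait for its RPC socket.
    set -e
    NETNS=cvl_0_0_ns_spdk                    # namespace seen in the trace
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC_SOCK=/var/tmp/spdk.sock              # rpc_addr from the trace
    ip netns exec "$NETNS" "$TGT" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
    for _ in $(seq 1 100); do                # max_retries=100, as in the trace
        [ -S "$RPC_SOCK" ] && break          # socket appears once the target is listening
        sleep 0.5
    done
    kill -0 "$nvmfpid"                       # fail if the target died during startup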
00:25:00.134 [2024-07-25 14:26:29.643410 - 14:26:29.687623] same errno = 111 connect() / qpair-failure triplet as above, repeated continuously over this interval while the host retries 10.0.0.2:4420
00:25:00.139 [2024-07-25 14:26:29.687861 - 14:26:29.690613] same errno = 111 connect() / qpair-failure triplet as above, repeated 9 times, interleaved with the target start-up lines below
00:25:00.139 [2024-07-25 14:26:29.689804] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization...
00:25:00.139 [2024-07-25 14:26:29.689884] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
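The restarted target comes up with EAL core mask -c 0xF0, matching the -m 0xF0 passed to nvmf_tgt above; 0xF0 has bits 4-7 set, so the target is pinned to CPU cores 4-7. A one-liner to sanity-check the mask arithmetic:

    printf '0x%X\n' $(( (1 << 4) | (1 << 5) | (1 << 6) | (1 << 7) ))   # prints 0xF0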
00:25:00.139 [2024-07-25 14:26:29.690909 - 14:26:29.700511] same errno = 111 connect() / qpair-failure triplet as above, repeated continuously over this interval
00:25:00.140 [2024-07-25 14:26:29.700777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.700840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.701135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.701204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.701499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.701561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.701840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.701905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.702155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.702222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.702515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.702586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.702822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.702886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.703188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.703258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.703500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.703564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.140 qpair failed and we were unable to recover it. 00:25:00.140 [2024-07-25 14:26:29.703867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.140 [2024-07-25 14:26:29.703930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 
00:25:00.141 [2024-07-25 14:26:29.704167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.704247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.704530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.704594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.704851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.704916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.705144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.705208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.705455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.705526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.705776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.705841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.706081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.706146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.706404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.706466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.706709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.706772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.707093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.707168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 
00:25:00.141 [2024-07-25 14:26:29.707482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.707545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.707753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.707814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.708085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.708150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.708405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.708465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.708720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.708786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.709080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.709144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.709434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.709495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.709747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.709810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.710120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.710183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.710435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.710510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 
00:25:00.141 [2024-07-25 14:26:29.710803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.710865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.711119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.711188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.711475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.711536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.711789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.711865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.712136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.712204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.712463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.712528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.712800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.712866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.713147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.713216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.713433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.713496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.713771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.713850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 
00:25:00.141 [2024-07-25 14:26:29.714164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.141 [2024-07-25 14:26:29.714231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.141 qpair failed and we were unable to recover it. 00:25:00.141 [2024-07-25 14:26:29.714519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.714586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.714850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.714913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.715185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.715255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.715520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.715585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.715800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.715863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.716085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.716171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.716489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.716553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.716854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.716919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.717175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.717243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 
00:25:00.142 [2024-07-25 14:26:29.717500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.717562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.717854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.717919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.718177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.718244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.718544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.718620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.718929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.719002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.719330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.719397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.719642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.719721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.720017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.720117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.720407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.720473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.720686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.720749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 
00:25:00.142 [2024-07-25 14:26:29.720946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.721009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.721281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.721347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.721613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.721676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.721870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.721930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.722217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.722283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.722547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.722610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.722853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.722918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.723193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.723259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.142 [2024-07-25 14:26:29.723549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.142 [2024-07-25 14:26:29.723613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.142 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.723911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.723975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 
00:25:00.143 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.143 [2024-07-25 14:26:29.724265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.724331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.724591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.724654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.724928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.724994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.725276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.725341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.725543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.725608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.725881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.725943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.726241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.726306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.726559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.726623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.726925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.726987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.727221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.727248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 
00:25:00.143 [2024-07-25 14:26:29.727340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.727366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.727466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.727493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.727617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.727642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.727753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.727778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.727869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.727895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.728037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.728080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.728197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.728222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.728340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.728365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.728472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.728498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.728613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.728639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 
00:25:00.143 [2024-07-25 14:26:29.728719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.728744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.728832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.728857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.728971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.728996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.729120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.729146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.729233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.729262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.729345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.143 [2024-07-25 14:26:29.729371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.143 qpair failed and we were unable to recover it. 00:25:00.143 [2024-07-25 14:26:29.729464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.729489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.729630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.729655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.729775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.729800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.729944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.729969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 
00:25:00.144 [2024-07-25 14:26:29.730081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.730120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.730246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.730272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.730392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.730416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.730533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.730558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.730647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.730671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.730788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.730813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.730890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.730917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.731038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.731071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.731218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.731243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.731326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.731351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 
00:25:00.144 [2024-07-25 14:26:29.731545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.731570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.731660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.731685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.731796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.731822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.731912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.731937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.732051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.732083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.732195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.732221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.732362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.732386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.732474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.732499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.732620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.732645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.732782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.732807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 
00:25:00.144 [2024-07-25 14:26:29.732927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.732952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.733051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.733085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.733200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.733225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.733309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.733333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.733427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.733450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.733564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.733589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.733679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.733703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.733798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.733824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.733939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.733965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.144 qpair failed and we were unable to recover it. 00:25:00.144 [2024-07-25 14:26:29.734055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.144 [2024-07-25 14:26:29.734097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 
00:25:00.145 [2024-07-25 14:26:29.734184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.734211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.734350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.734375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.734487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.734512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.734604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.734630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.734749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.734773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.734893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.734918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.735009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.735034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.735155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.735180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.735292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.735317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.735401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.735426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 
00:25:00.145 [2024-07-25 14:26:29.735546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.735571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.735682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.735706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.735807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.735846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.735995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.736022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.736136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.736163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.736259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.736286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.736483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.736509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.736599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.736624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.736725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.736752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.736838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.736863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 
00:25:00.145 [2024-07-25 14:26:29.736943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.736967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.737085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.737111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.737195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.737220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.737332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.737356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.737463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.737488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.737601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.737627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.737709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.737733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.737849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.737874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.737949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.737974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.738072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.738097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 
00:25:00.145 [2024-07-25 14:26:29.738183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.738208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.738322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.738346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.738467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.738492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.738571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.145 [2024-07-25 14:26:29.738595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.145 qpair failed and we were unable to recover it. 00:25:00.145 [2024-07-25 14:26:29.738711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.738736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.738855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.738879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.739028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.739053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.739171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.739195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.739283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.739307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.739430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.739454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 
00:25:00.146 [2024-07-25 14:26:29.739540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.739564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.739653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.739677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.739763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.739788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.739930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.739954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.740073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.740099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.740186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.740215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.740340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.740365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.740452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.740479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.740597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.740622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.740718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.740745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 
00:25:00.146 [2024-07-25 14:26:29.740892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.740917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.741010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.741036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.741175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.741202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.741295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.741321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.741430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.741455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.741540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.741565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.741676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.741701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.741816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.741842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.741957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.741981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.742076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.742101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 
00:25:00.146 [2024-07-25 14:26:29.742182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.742207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.742298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.742322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.742464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.742488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.742581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.742606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.742692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.742716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.742830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.742855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.742941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.742965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.743069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.743108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.146 [2024-07-25 14:26:29.743207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.146 [2024-07-25 14:26:29.743234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.146 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.743317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.743342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 
00:25:00.147 [2024-07-25 14:26:29.743480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.743505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.743616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.743641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.743796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.743823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.743976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.744002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.744114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.744140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.744222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.744246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.744336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.744359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.744472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.744496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.744588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.744611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.744724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.744751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 
00:25:00.147 [2024-07-25 14:26:29.744947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.744972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.745087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.745113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.745204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.745228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.745342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.745367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.745462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.745487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.745607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.745637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.745782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.745806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.745925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.745949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.746076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.746101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.746192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.746216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 
00:25:00.147 [2024-07-25 14:26:29.746342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.746366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.746458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.746482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.746595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.746620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.746712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.746737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.746818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.746842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.746959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.746983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.747071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.747096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.747180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.747205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.747327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.747351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.747474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.747499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 
00:25:00.147 [2024-07-25 14:26:29.747599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.747638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.747741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.747768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.747850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.747876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.147 qpair failed and we were unable to recover it. 00:25:00.147 [2024-07-25 14:26:29.748018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.147 [2024-07-25 14:26:29.748043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.748143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.748169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.748256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.748281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.748422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.748448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.748588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.748613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.748707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.748732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.748825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.748851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 
00:25:00.148 [2024-07-25 14:26:29.748971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.748995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.749097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.749122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.749235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.749263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.749378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.749403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.749497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.749521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.749610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.749635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.749779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.749803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.749892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.749915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.750033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.750154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 
00:25:00.148 [2024-07-25 14:26:29.750272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.750384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.750490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.750620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.750726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.750846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.750970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.750996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.751191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.751217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.751360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.751385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.751530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.751555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 
00:25:00.148 [2024-07-25 14:26:29.751646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.751672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.751786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.751811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.751904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.751929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.148 qpair failed and we were unable to recover it. 00:25:00.148 [2024-07-25 14:26:29.752021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.148 [2024-07-25 14:26:29.752048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.149 qpair failed and we were unable to recover it. 00:25:00.149 [2024-07-25 14:26:29.752201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.149 [2024-07-25 14:26:29.752225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.149 qpair failed and we were unable to recover it. 00:25:00.149 [2024-07-25 14:26:29.752335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.149 [2024-07-25 14:26:29.752360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.149 qpair failed and we were unable to recover it. 00:25:00.149 [2024-07-25 14:26:29.752441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.149 [2024-07-25 14:26:29.752465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.149 qpair failed and we were unable to recover it. 00:25:00.149 [2024-07-25 14:26:29.752576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.149 [2024-07-25 14:26:29.752601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.149 qpair failed and we were unable to recover it. 00:25:00.149 [2024-07-25 14:26:29.752689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.149 [2024-07-25 14:26:29.752714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.149 qpair failed and we were unable to recover it. 00:25:00.149 [2024-07-25 14:26:29.752797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.149 [2024-07-25 14:26:29.752825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.149 qpair failed and we were unable to recover it. 
00:25:00.149 [2024-07-25 14:26:29.752935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.149 [2024-07-25 14:26:29.752959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.431 qpair failed and we were unable to recover it. 00:25:00.431 [2024-07-25 14:26:29.753042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.431 [2024-07-25 14:26:29.753081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.431 qpair failed and we were unable to recover it. 00:25:00.431 [2024-07-25 14:26:29.753198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.431 [2024-07-25 14:26:29.753223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.431 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.753299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.753324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.753420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.753444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.753551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.753576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.753657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.753681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.753799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.753824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.753908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.753932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.754011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.754035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 
00:25:00.432 [2024-07-25 14:26:29.754129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.754154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.754270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.754294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.754386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.754410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.754526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.754550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.754634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.754658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.754738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.754762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.754870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.754895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.754995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.755034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.755145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.755184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.755283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.755311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 
00:25:00.432 [2024-07-25 14:26:29.755461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.755486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.755596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.755622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.755736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.755763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.755849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.755875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.755950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.755975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.756066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.756092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.756210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.756238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.756330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.756354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.756437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.756461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 00:25:00.432 [2024-07-25 14:26:29.756545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.432 [2024-07-25 14:26:29.756569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.432 qpair failed and we were unable to recover it. 
00:25:00.432 [2024-07-25 14:26:29.756686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.432 [2024-07-25 14:26:29.756710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420
00:25:00.432 qpair failed and we were unable to recover it.
00:25:00.433 [2024-07-25 14:26:29.758658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:00.433 [2024-07-25 14:26:29.758775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.433 [2024-07-25 14:26:29.758804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:25:00.433 qpair failed and we were unable to recover it.
00:25:00.433 [2024-07-25 14:26:29.758908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.433 [2024-07-25 14:26:29.758947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420
00:25:00.433 qpair failed and we were unable to recover it.
[repeated entries omitted: the same connect() failed, errno = 111 / sock connection error / qpair failed sequence recurs for tqpair=0x221c250, 0x7f9f44000b90 and 0x7f9f4c000b90 against addr=10.0.0.2, port=4420 from 14:26:29.756 through 14:26:29.784]
00:25:00.439 [2024-07-25 14:26:29.784725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.784751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.784874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.784899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.784987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.785014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.785135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.785174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.785304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.785332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.785419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.785445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.785532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.785558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.785698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.785723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.785842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.785868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.785976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.786003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 
00:25:00.439 [2024-07-25 14:26:29.786150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.786176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.786293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.786320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.786435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.786461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.786603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.786628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.786715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.786743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.786864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.786904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.787002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.787029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.787129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.787156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.787245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.787271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.439 qpair failed and we were unable to recover it. 00:25:00.439 [2024-07-25 14:26:29.787385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.439 [2024-07-25 14:26:29.787410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 
00:25:00.440 [2024-07-25 14:26:29.787501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.787529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.787647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.787674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.787768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.787794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.787876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.787903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.788042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.788074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.788197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.788223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.788304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.788330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.788421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.788446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.788561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.788597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.788715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.788741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 
00:25:00.440 [2024-07-25 14:26:29.788852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.788878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.788965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.788990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.789120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.789148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.789296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.789322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.789412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.789437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.789525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.789551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.789665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.789691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.789802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.789827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.789909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.789934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.790009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 
00:25:00.440 [2024-07-25 14:26:29.790131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.790272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.790387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.790504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.790620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.790732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.790834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.790940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.790965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.791087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.791114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.791208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.791235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 
00:25:00.440 [2024-07-25 14:26:29.791347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.791374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.791492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.791517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.791636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.791661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.440 [2024-07-25 14:26:29.791748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.440 [2024-07-25 14:26:29.791773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.440 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.791884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.791923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.792022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.792055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.792151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.792177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.792292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.792318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.792400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.792426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.792511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.792538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 
00:25:00.441 [2024-07-25 14:26:29.792631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.792658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.792769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.792794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.792901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.792927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.793039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.793071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.793199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.793225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.793375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.793401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.793478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.793503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.793593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.793618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.793731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.793756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.793858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.793897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 
00:25:00.441 [2024-07-25 14:26:29.793995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.794022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.794172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.794201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.794320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.794347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.794445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.794471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.794586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.794612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.794737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.794764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.794859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.794885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.794991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.795017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.795138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.795164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.795272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.795296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 
00:25:00.441 [2024-07-25 14:26:29.795412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.795438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.795534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.795558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.795698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.795728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.795847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.795871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.795957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.795982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.796126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.796151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.441 qpair failed and we were unable to recover it. 00:25:00.441 [2024-07-25 14:26:29.796265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.441 [2024-07-25 14:26:29.796291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.796390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.796414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.796496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.796520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.796661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.796687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 
00:25:00.442 [2024-07-25 14:26:29.796769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.796793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.796902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.796928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.797016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.797041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.797130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.797156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.797268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.797293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.797419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.797444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.797543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.797568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.797684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.797710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.797818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.797857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.797985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.798014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 
00:25:00.442 [2024-07-25 14:26:29.798137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.798164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.798255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.798281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.798386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.798412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.798526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.798552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.798641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.798667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.798761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.798789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.798883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.798909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.799002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.799027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.799119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.799146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.799232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.799263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 
00:25:00.442 [2024-07-25 14:26:29.799413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.799439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.799520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.799546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.799741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.799767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.799887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.799915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.800001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.442 [2024-07-25 14:26:29.800027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.442 qpair failed and we were unable to recover it. 00:25:00.442 [2024-07-25 14:26:29.800175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.800201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.800314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.800340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.800431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.800456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.800541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.800567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.800652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.800678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 
00:25:00.443 [2024-07-25 14:26:29.800802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.800841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.800961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.800986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.801090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.801116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.801209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.801234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.801348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.801375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.801455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.801480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.801592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.801619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.801709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.801735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.801845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.801871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 00:25:00.443 [2024-07-25 14:26:29.801989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.443 [2024-07-25 14:26:29.802014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.443 qpair failed and we were unable to recover it. 
00:25:00.443 [2024-07-25 14:26:29.802118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.443 [2024-07-25 14:26:29.802146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:25:00.443 qpair failed and we were unable to recover it.
00:25:00.443 [2024-07-25 14:26:29.802255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.443 [2024-07-25 14:26:29.802281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420
00:25:00.443 qpair failed and we were unable to recover it.
00:25:00.443 [2024-07-25 14:26:29.802797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.443 [2024-07-25 14:26:29.802823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420
00:25:00.443 qpair failed and we were unable to recover it.
00:25:00.443 [... the same three-line sequence (connect() failed, errno = 111 -> sock connection error of tqpair=... with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats continuously from 14:26:29.802 through 14:26:29.830 for tqpair handles 0x221c250, 0x7f9f44000b90, 0x7f9f4c000b90, and 0x7f9f54000b90 ...]
00:25:00.449 [2024-07-25 14:26:29.830771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.449 [2024-07-25 14:26:29.830796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420
00:25:00.449 qpair failed and we were unable to recover it.
00:25:00.449 [2024-07-25 14:26:29.830880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.830906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.831007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.831047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.831172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.831211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.831335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.831362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.831514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.831540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.831655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.831681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.831795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.831820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.831939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.831964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.832054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.832087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 00:25:00.449 [2024-07-25 14:26:29.832203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.449 [2024-07-25 14:26:29.832229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.449 qpair failed and we were unable to recover it. 
00:25:00.450 [2024-07-25 14:26:29.832373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.832399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.832525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.832553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.832650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.832675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.832792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.832819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.832915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.832942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.833065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.833091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.833208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.833235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.833346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.833371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.833464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.833489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.833575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.833600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 
00:25:00.450 [2024-07-25 14:26:29.833720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.833746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.833830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.833857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.833945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.833970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.834095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.834121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.834235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.834261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.834409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.834459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.834584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.834609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.834702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.834727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.834842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.834868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.834983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.835007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 
00:25:00.450 [2024-07-25 14:26:29.835104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.835129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.835252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.835278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.835392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.835417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.835504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.835529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.835649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.835675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.835767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.835793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.835878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.835903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.836016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.836041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.836144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.836170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.836255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.836281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 
00:25:00.450 [2024-07-25 14:26:29.836377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.836403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.836486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.836513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.836624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.836649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.836769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.450 [2024-07-25 14:26:29.836797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.450 qpair failed and we were unable to recover it. 00:25:00.450 [2024-07-25 14:26:29.836892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.836931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.837044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.837090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.837187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.837212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.837305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.837332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.837425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.837451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.837568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.837600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 
00:25:00.451 [2024-07-25 14:26:29.837683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.837710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.837792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.837822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.837948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.837974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.838117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.838143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.838258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.838282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.838402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.838426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.838543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.838567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.838680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.838705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.838832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.838861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.838946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.838973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 
00:25:00.451 [2024-07-25 14:26:29.839070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.839097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.839209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.839234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.839323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.839348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.839476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.839503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.839626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.839652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.839769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.839795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.839884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.839910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.840029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.840074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.840220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.840246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.840336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.840360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 
00:25:00.451 [2024-07-25 14:26:29.840471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.840496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.840618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.840642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.840761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.840787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.840867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.840894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.841012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.841038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.841164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.841190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.841288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.841314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.451 [2024-07-25 14:26:29.841405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.451 [2024-07-25 14:26:29.841430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.451 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.841517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.841554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.841679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.841705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 
00:25:00.452 [2024-07-25 14:26:29.841800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.841839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.841946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.841973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.842098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.842126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.842236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.842261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.842339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.842364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.842458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.842485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.842570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.842597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.842682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.842706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.842821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.842845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.842924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.842949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 
00:25:00.452 [2024-07-25 14:26:29.843077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.843102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.843197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.843222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.843319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.843344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.843458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.843482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.843592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.843616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.843708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.843732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.843810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.843834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.843953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.843986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.844098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.844125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.844225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.844251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 
00:25:00.452 [2024-07-25 14:26:29.844364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.844389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.844586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.844613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.844783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.844823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.844923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.844951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.845073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.845102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.845226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.845254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.452 [2024-07-25 14:26:29.845342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.452 [2024-07-25 14:26:29.845368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.452 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.845484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.845510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.845621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.845646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.845739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.845766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 
00:25:00.453 [2024-07-25 14:26:29.845888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.845915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.846002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.846029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.846124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.846152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.846266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.846292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.846408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.846434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.846548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.846580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.846703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.846728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.846822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.846849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.846969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.846999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.847117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.847156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 
00:25:00.453 [2024-07-25 14:26:29.847274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.847301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.847400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.847424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.847547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.847572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.847668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.847693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.847791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.847830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.847923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.847950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.848050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.848091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.848188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.848215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.848294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.848320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.848414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.848439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 
00:25:00.453 [2024-07-25 14:26:29.848525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.848552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.848667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.848693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.848791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.848817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.848913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.848938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.849029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.849055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.849173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.849199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.849313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.849339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.849426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.849451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.849571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.849608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 00:25:00.453 [2024-07-25 14:26:29.849763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.453 [2024-07-25 14:26:29.849789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.453 qpair failed and we were unable to recover it. 
00:25:00.454 [2024-07-25 14:26:29.849872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.849898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.850016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.850041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.850137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.850163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.850251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.850276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.850416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.850444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.850561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.850591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.850709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.850735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.850855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.850881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.850997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.851023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.851120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.851146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 
00:25:00.454 [2024-07-25 14:26:29.851238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.851264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.851357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.851383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.851505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.851531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.851678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.851703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.851825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.851850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.851984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.852024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.852159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.852186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.852302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.852327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.852427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.852453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.852572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.852598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 
00:25:00.454 [2024-07-25 14:26:29.852677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.852701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.852842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.852867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.852985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.853010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.853094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.853120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.853236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.853262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.853376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.853402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.853481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.853506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.853653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.853679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.853827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.853853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.853935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.853961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 
00:25:00.454 [2024-07-25 14:26:29.854043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.854080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.854176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.854204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.454 qpair failed and we were unable to recover it. 00:25:00.454 [2024-07-25 14:26:29.854291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.454 [2024-07-25 14:26:29.854318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.854436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.854462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.854540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.854565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.854680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.854706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.854798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.854823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.854937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.854963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.855087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.855114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.855201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.855226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 
00:25:00.455 [2024-07-25 14:26:29.855343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.855368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.855475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.855501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.855617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.855645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.855737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.855763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.855847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.855877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.855956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.855987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.856086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.856112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.856227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.856252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.856336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.856361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.856454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.856483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 
00:25:00.455 [2024-07-25 14:26:29.856574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.856598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.856685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.856710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.856805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.856830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.856943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.856968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.857091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.857117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.857204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.857230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.857312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.857338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.857418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.857444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.857588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.857624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.857756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.857781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 
00:25:00.455 [2024-07-25 14:26:29.857874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.857900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.857990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.858017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.858131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.858158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.858250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.858275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.455 qpair failed and we were unable to recover it. 00:25:00.455 [2024-07-25 14:26:29.858367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.455 [2024-07-25 14:26:29.858393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.858482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.858507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.858610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.858636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.858735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.858761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.858853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.858878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.859032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.859088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 
00:25:00.456 [2024-07-25 14:26:29.859188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.859214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.859301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.859327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.859464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.859490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.859606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.859632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.859735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.859761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.859880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.859906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.860019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.860045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.860253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.860280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.860428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.860455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.860646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.860672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 
00:25:00.456 [2024-07-25 14:26:29.860770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.860795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.860902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.860928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.861076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.861103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.861214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.861240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.861341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.861378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.861504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.861534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.861659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.861684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.861807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.861831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.861974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.862000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.862094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.862120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 
00:25:00.456 [2024-07-25 14:26:29.862202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.862226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.862345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.862370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.862531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.862556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.862649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.862676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.862792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.862818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.862959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.862985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.863106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.863134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.863274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.863299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.863451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.863479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 00:25:00.456 [2024-07-25 14:26:29.863601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.456 [2024-07-25 14:26:29.863627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.456 qpair failed and we were unable to recover it. 
00:25:00.456 [2024-07-25 14:26:29.863741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.863767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.863851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.863877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.864021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.864047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.864178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.864204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.864318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.864350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.864463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.864490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.864616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.864643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.864730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.864756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.864873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.864899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.865005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.865031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 
00:25:00.457 [2024-07-25 14:26:29.865185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.865211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.865326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.865351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.865478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.865513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.865638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.865664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.865754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.865780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.865863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.865888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.866016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.866041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.866212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.866251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.866351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.866378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.866466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.866491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 
00:25:00.457 [2024-07-25 14:26:29.866603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.866628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.866771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.866805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.866891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.866916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.867026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.867051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.867171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.867197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.867281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.867313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.867404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.867441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.867542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.867567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.867654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.867679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 00:25:00.457 [2024-07-25 14:26:29.867759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.457 [2024-07-25 14:26:29.867785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.457 qpair failed and we were unable to recover it. 
00:25:00.457 [2024-07-25 14:26:29.867904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.867929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.868049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.868092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.868213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.868239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.868327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.868352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.868443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.868468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.868557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.868582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.868704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.868730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.868842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.868868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.868954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.868979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.869078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.869104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 
00:25:00.458 [2024-07-25 14:26:29.869199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.869225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.869319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.869343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.869425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.869451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.869564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.869589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.869677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.869702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.869820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.869846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.869941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.869967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.870048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.870079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.870195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.870220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.870338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.870367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 
00:25:00.458 [2024-07-25 14:26:29.870454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.870479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.870565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.870590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.870686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.870719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.870844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.870870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.870992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.871017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.871110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.871137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.871256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.871282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.871401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.871440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.871532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.871559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.871689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.871715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 
00:25:00.458 [2024-07-25 14:26:29.871817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.871843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.871934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.871960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.872048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.872084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.872166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.872193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.872283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.872309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.872390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.872422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.872511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.458 [2024-07-25 14:26:29.872537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.458 qpair failed and we were unable to recover it. 00:25:00.458 [2024-07-25 14:26:29.872623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.872649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.872732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.872758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.872845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.872873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 
00:25:00.459 [2024-07-25 14:26:29.872956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.872981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.873119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.873159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.873254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.873281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.873420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.873446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.873521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.873546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.873639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.873666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.873779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.873804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 00:25:00.459 [2024-07-25 14:26:29.873808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:00.459 [2024-07-25 14:26:29.873842] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.459 [2024-07-25 14:26:29.873856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.459 [2024-07-25 14:26:29.873872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.459 [2024-07-25 14:26:29.873889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.459 [2024-07-25 14:26:29.873885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.459 [2024-07-25 14:26:29.873912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.459 qpair failed and we were unable to recover it. 
00:25:00.459 [2024-07-25 14:26:29.873949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:00.459 [2024-07-25 14:26:29.873978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:00.459 [2024-07-25 14:26:29.874017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:00.459 [2024-07-25 14:26:29.874020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:00.459 [2024-07-25 14:26:29.874005 - 14:26:29.874967] repeated connect() failures: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, each followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it."
00:25:00.459 - 00:25:00.464 [2024-07-25 14:26:29.875073 - 14:26:29.897359] the same failure pattern continues for every remaining connect attempt: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 / 0x7f9f4c000b90 / 0x7f9f54000b90 / 0x221c250 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."
00:25:00.464 [2024-07-25 14:26:29.897443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.897467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.897549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.897573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.897651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.897676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.897760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.897783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.897874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.897899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.897982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.898096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.898219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.898332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.898444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 
00:25:00.464 [2024-07-25 14:26:29.898558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.898675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.898805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.898922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.898946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.899022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.899047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.899133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.899157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.899251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.899275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.899364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.899388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.899465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.899489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.464 [2024-07-25 14:26:29.899574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.899599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 
00:25:00.464 [2024-07-25 14:26:29.899713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.464 [2024-07-25 14:26:29.899740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.464 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.899840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.899865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.899952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.899977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.900076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.900102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.900192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.900218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.900303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.900330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.900428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.900453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.900538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.900564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.900658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.900685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.900791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.900818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 
00:25:00.465 [2024-07-25 14:26:29.900911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.900940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.901025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.901053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.901179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.901205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.901295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.901324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.901424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.901450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.901544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.901569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.901666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.901693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.901785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.901809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.901884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.901909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.902032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.902056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 
00:25:00.465 [2024-07-25 14:26:29.902178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.902205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.902294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.902318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.902403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.902428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.902519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.902549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.902645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.902670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.902776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.902801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.902880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.902905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.902993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.903021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.903147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.903174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.903265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.903291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 
00:25:00.465 [2024-07-25 14:26:29.903380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.903406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.903537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.903562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.903659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.903684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.903827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.903853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.903945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.903970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.904112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.904139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.904241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.904266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.904368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.904394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.904490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.904516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.904603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.904629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 
00:25:00.465 [2024-07-25 14:26:29.904743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.465 [2024-07-25 14:26:29.904768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.465 qpair failed and we were unable to recover it. 00:25:00.465 [2024-07-25 14:26:29.904866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.904891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.904984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.905101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.905216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.905331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.905455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.905616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.905750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.905857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 
00:25:00.466 [2024-07-25 14:26:29.905968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.905993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.906110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.906143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.906224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.906249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.906341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.906366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.906458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.906488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.906583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.906618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.906747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.906778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.906873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.906899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.906996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.907023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.907119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.907147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 
00:25:00.466 [2024-07-25 14:26:29.907247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.907273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.907362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.907387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.907466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.907492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.907578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.907604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.907697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.907722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.907866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.907891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.907977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.908082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.908198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.908312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 
00:25:00.466 [2024-07-25 14:26:29.908424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.908537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.908644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.908750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.908889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.908934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.909076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.909104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.909192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.909217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.466 [2024-07-25 14:26:29.909304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.466 [2024-07-25 14:26:29.909330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.466 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.909409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.909434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.909531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.909564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 
00:25:00.467 [2024-07-25 14:26:29.909670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.909696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.909782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.909807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.909919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.909944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.910024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.910157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.910267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.910381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.910518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.910626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.910735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 
00:25:00.467 [2024-07-25 14:26:29.910854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.910960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.910986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.911115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.911141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.911234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.911258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.911359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.911384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.911490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.911515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.911600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.911625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.911718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.911744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.911877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.911923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.912033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.912073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 
00:25:00.467 [2024-07-25 14:26:29.912168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.912199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.912290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.912316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.912408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.912434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.912535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.912561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.912650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.912676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.912754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.912779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.912868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.912893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.913007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.913121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.913237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 
00:25:00.467 [2024-07-25 14:26:29.913351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.913474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.913587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.913693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.913798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.913913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.913938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.914027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.914051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.914138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.914162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.914261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.467 [2024-07-25 14:26:29.914285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.467 qpair failed and we were unable to recover it. 00:25:00.467 [2024-07-25 14:26:29.914366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.914390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 
00:25:00.468 [2024-07-25 14:26:29.914487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.914521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.914610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.914634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.914713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.914737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.914818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.914843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.914944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.914969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.915051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.915176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.915282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.915393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.915539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 
00:25:00.468 [2024-07-25 14:26:29.915642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.915751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.915860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.915972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.915996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.916095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.916120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.916198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.916223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.916306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.916331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.916416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.916441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.916525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.916550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.916629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.916654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 
00:25:00.468 [2024-07-25 14:26:29.916744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.916768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.916882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.916907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.917004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.917050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.917158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.917186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.917277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.917311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.917419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.917445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.917533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.917561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.917658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.917684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.917779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.917804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.917900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.917929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 
00:25:00.468 [2024-07-25 14:26:29.918022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.918158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.918270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.918392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.918510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.918614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.918731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.918841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.918962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.918992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.919082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.919109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 
00:25:00.468 [2024-07-25 14:26:29.919203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.919229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.919336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.919362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.919476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.919501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.919579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.919611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.919697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.468 [2024-07-25 14:26:29.919727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.468 qpair failed and we were unable to recover it. 00:25:00.468 [2024-07-25 14:26:29.919812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.919843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.919941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.919968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.920134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.920160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.920248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.920272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.920351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.920376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 
00:25:00.469 [2024-07-25 14:26:29.920461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.920485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.920566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.920590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.920714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.920740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.920821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.920846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.920930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.920954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.921075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.921107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.921197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.921223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.921314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.921339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.921433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.921458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.921543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.921568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 
00:25:00.469 [2024-07-25 14:26:29.921653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.921679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.921760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.921785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.921899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.921925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.922009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.922034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.922134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.922160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.922273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.922297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.922412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.922437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.922526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.922551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.922637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.922662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.922746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.922772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 
00:25:00.469 [2024-07-25 14:26:29.922855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.922880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.923081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.923200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.923301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.923409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.923520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.923647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.923755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.923866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.923979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 
00:25:00.469 [2024-07-25 14:26:29.924089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.924202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.924318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.924432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.924551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.924682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.924800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.924933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.924959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.469 [2024-07-25 14:26:29.925049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.469 [2024-07-25 14:26:29.925085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.469 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.925186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.925217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 
00:25:00.470 [2024-07-25 14:26:29.925311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.925336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.925439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.925466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.925563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.925589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.925670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.925695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.925792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.925817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.925917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.925942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.926021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.926046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.926150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.926176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.926260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.926285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.926368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.926398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 
00:25:00.470 [2024-07-25 14:26:29.926510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.926535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.926633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.926657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.926742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.926766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.926850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.926880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.926977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.927095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.927219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.927324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.927437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.927559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 
00:25:00.470 [2024-07-25 14:26:29.927681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.927798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.927914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.927940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.928042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.928109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.928237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.928275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.928375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.928402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.928492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.928518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.928619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.928646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.928780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.928808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.928919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.928951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 
00:25:00.470 [2024-07-25 14:26:29.929035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.929069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.929159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.929184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.929283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.929313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.929432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.929457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.929555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.929583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.929701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.929732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.929816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.929842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.470 [2024-07-25 14:26:29.929929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.470 [2024-07-25 14:26:29.929954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.470 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.930036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.930066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.930168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.930192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 
00:25:00.471 [2024-07-25 14:26:29.930284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.930309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.930422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.930446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.930533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.930558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.930638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.930663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.930780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.930807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.930891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.930917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.931002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.931162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.931280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.931402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 
00:25:00.471 [2024-07-25 14:26:29.931512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.931622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.931744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.931852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.931955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.931981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.932074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.932100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.932202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.932227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.932311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.932336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.932425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.932450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.932541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.932566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 
00:25:00.471 [2024-07-25 14:26:29.932645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.932670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.932753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.932778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.932870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.932899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.932978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.933096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.933214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.933326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.933462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.933605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.933715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 
00:25:00.471 [2024-07-25 14:26:29.933828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.933951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.933976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.934076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.934133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.934251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.934288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.934436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.934464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.934560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.934587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.934682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.934707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.934826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.934851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.934935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.934960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.935055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.935085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 
00:25:00.471 [2024-07-25 14:26:29.935209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.935235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.935359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.935384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.935469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.935494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.935590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.935615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.935710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.935735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.935818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.471 [2024-07-25 14:26:29.935843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.471 qpair failed and we were unable to recover it. 00:25:00.471 [2024-07-25 14:26:29.935935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.935960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.936043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.936076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.936170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.936195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.936290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.936320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 
00:25:00.472 [2024-07-25 14:26:29.936409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.936434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.936521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.936546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.936648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.936673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.936765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.936793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.936878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.936904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.936987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.937012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.937133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.937165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.937269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.937296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.937383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.937415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.937513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.937539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 
00:25:00.472 [2024-07-25 14:26:29.937649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.937674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.937765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.937795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.937886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.937912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.938017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.938043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.938132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.938157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.938251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.938276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.938359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.938383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.938503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.938527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.938617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.938645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.938730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.938755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 
00:25:00.472 [2024-07-25 14:26:29.938881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.938924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.939047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.939180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.939320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.939438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.939548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.939668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.939778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.939897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.939978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 
00:25:00.472 [2024-07-25 14:26:29.940088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.940232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.940343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.940450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.940555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.940671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.940794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.940894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.940919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.941001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.941026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.472 [2024-07-25 14:26:29.941110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.941136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 
00:25:00.472 [2024-07-25 14:26:29.941226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.472 [2024-07-25 14:26:29.941252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.472 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.941343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.941368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.941456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.941482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.941559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.941585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.941664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.941690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.941781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.941806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.941892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.941917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.942013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.942038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.942175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.942208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.942326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.942353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 
00:25:00.473 [2024-07-25 14:26:29.942452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.942488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.942595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.942631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.942742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.942779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.942893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.942937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.943087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.943123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.943204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.943229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.943315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.943340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.943438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.943464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.943550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.943575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.943653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.943678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 
00:25:00.473 [2024-07-25 14:26:29.943775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.943809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.943914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.943941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.944040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.944075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.944199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.944225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.944310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.944335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.944460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.944485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.944559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.944584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.944712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.944736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.944852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.944877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.944977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.945016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 
00:25:00.473 [2024-07-25 14:26:29.945144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.945170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.945255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.945284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.945388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.945415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.945493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.945525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.945623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.945650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.945734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.945760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.945889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.945915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.946001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.946027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.946129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.946155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.946239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.946264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 
00:25:00.473 [2024-07-25 14:26:29.946350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.946379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.946464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.946490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.946605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.946630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.946718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.946744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.946827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.473 [2024-07-25 14:26:29.946852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.473 qpair failed and we were unable to recover it. 00:25:00.473 [2024-07-25 14:26:29.946935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.946960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.947035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.947081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.947178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.947203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.947291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.947317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.947423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.947448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 
00:25:00.474 [2024-07-25 14:26:29.947530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.947554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.947640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.947664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.947751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.947776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.947876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.947901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.947997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.948122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.948238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.948342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.948480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.948586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 
00:25:00.474 [2024-07-25 14:26:29.948697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.948809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.948936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.948967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.949109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.949148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.949242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.949270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.949372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.949398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.949484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.949509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.949595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.949636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.949764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.949790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.949877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.949903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 
00:25:00.474 [2024-07-25 14:26:29.949990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.950906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.950991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 
00:25:00.474 [2024-07-25 14:26:29.951113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.951218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.951324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.951436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.951567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.951667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.951770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.951874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.474 [2024-07-25 14:26:29.951899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.474 qpair failed and we were unable to recover it. 00:25:00.474 [2024-07-25 14:26:29.951997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.952125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 
00:25:00.475 [2024-07-25 14:26:29.952238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.952346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.952448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.952585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.952686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.952812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.952915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.952940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.953030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.953055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.953194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.953219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.953298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.953323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 
00:25:00.475 [2024-07-25 14:26:29.953405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.953430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.953541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.953566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.953650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.953674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.953751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.953776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.953895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.953920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.954006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.954132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.954242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.954389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.954506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 
00:25:00.475 [2024-07-25 14:26:29.954613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.954741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.954856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.954970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.954997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.955114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.955141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.955227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.955252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.955355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.955380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.955476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.955501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.955613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.955638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.955713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.955738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 
00:25:00.475 [2024-07-25 14:26:29.955857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.955881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.955968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.955993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.956078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.956104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.956222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.956247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.956330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.956362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.956438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.956463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.956543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.956568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.956651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.956676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.956798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.956822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.956906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.956931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 
00:25:00.475 [2024-07-25 14:26:29.957018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.957043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.957137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.957162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.957244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.957269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.957398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.957422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.957515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.957539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 A controller has encountered a failure and is being reset. 00:25:00.475 [2024-07-25 14:26:29.957667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.957707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.957838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.957866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f44000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.957959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.957988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.958077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.475 [2024-07-25 14:26:29.958111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.475 qpair failed and we were unable to recover it. 00:25:00.475 [2024-07-25 14:26:29.958209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.958237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 
00:25:00.476 [2024-07-25 14:26:29.958323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.958349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.958464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.958490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.958579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.958604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.958696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.958728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.958823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.958848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.958939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.958969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.959074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.959105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.959194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.959220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.959304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.959341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.959443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.959469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 
00:25:00.476 [2024-07-25 14:26:29.959555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.959583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.959682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.959707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.959784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.959814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.959940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.959965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f4c000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.960057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.960091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.960188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.960213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x221c250 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.960305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.960342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.960463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.960500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 00:25:00.476 [2024-07-25 14:26:29.960611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.960647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9f54000b90 with addr=10.0.0.2, port=4420 00:25:00.476 qpair failed and we were unable to recover it. 
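Every pair of messages in the block above is the same event: errno 111 is ECONNREFUSED, i.e. nothing was accepting TCP connections on 10.0.0.2:4420 while the disconnect test had the target side down, so each qpair reconnect attempt fails until the listener comes back (a few more of these follow below before the controller reset gives up). A minimal sketch of the same reachability check from a shell on the initiator, using bash's built-in /dev/tcp redirection; the address and port are taken from the log, and the one-second timeout is an arbitrary choice, not something the test itself uses:
  # Probe the target address/port from the log; while connect() would return
  # errno 111 (ECONNREFUSED), the redirection below fails and the else branch runs.
  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "10.0.0.2:4420 is accepting connections"
  else
      echo "10.0.0.2:4420 refused or unreachable"
  fi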
00:25:00.476 [2024-07-25 14:26:29.960779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.476 [2024-07-25 14:26:29.960818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x222a230 with addr=10.0.0.2, port=4420 00:25:00.476 [2024-07-25 14:26:29.960837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x222a230 is same with the state(5) to be set 00:25:00.476 [2024-07-25 14:26:29.960865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x222a230 (9): Bad file descriptor 00:25:00.476 [2024-07-25 14:26:29.960885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.476 [2024-07-25 14:26:29.960898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.476 [2024-07-25 14:26:29.960920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.476 Unable to reset the controller. 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.476 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.735 Malloc0 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.735 [2024-07-25 14:26:30.071909] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:00.735 14:26:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.735 [2024-07-25 14:26:30.100202] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:00.735 14:26:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1020442 00:25:01.668 Controller properly reset. 00:25:06.940 Initializing NVMe Controllers 00:25:06.940 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:06.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:06.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:06.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:06.940 Initialization complete. Launching workers. 
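The xtrace lines above rebuild the target side over JSON-RPC before the controller recovers: a 64 MB malloc bdev, the TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with Malloc0 as a namespace, and TCP listeners (subsystem plus discovery) on 10.0.0.2:4420, after which the log reports "Controller properly reset." Run by hand, roughly the same sequence would look like the sketch below; the arguments are copied from the trace, and the only assumption is that rpc_cmd in the harness resolves to the standard scripts/rpc.py client:
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp -o             # same transport flags as the trace
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420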
00:25:06.940 Starting thread on core 1 00:25:06.940 Starting thread on core 2 00:25:06.940 Starting thread on core 3 00:25:06.940 Starting thread on core 0 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:06.940 00:25:06.940 real 0m10.731s 00:25:06.940 user 0m33.491s 00:25:06.940 sys 0m7.503s 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:06.940 ************************************ 00:25:06.940 END TEST nvmf_target_disconnect_tc2 00:25:06.940 ************************************ 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:06.940 rmmod nvme_tcp 00:25:06.940 rmmod nvme_fabrics 00:25:06.940 rmmod nvme_keyring 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1020852 ']' 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1020852 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1020852 ']' 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1020852 00:25:06.940 14:26:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1020852 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1020852' 00:25:06.940 
killing process with pid 1020852 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1020852 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1020852 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.940 14:26:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:08.848 14:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:08.848 00:25:08.848 real 0m15.630s 00:25:08.848 user 0m58.928s 00:25:08.848 sys 0m10.032s 00:25:08.848 14:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:08.848 14:26:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:08.848 ************************************ 00:25:08.848 END TEST nvmf_target_disconnect 00:25:08.848 ************************************ 00:25:08.848 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:25:08.848 14:26:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:08.848 00:25:08.848 real 4m57.093s 00:25:08.848 user 10m46.200s 00:25:08.848 sys 1m14.563s 00:25:08.848 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:08.848 14:26:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.848 ************************************ 00:25:08.848 END TEST nvmf_host 00:25:08.848 ************************************ 00:25:08.848 14:26:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:08.848 00:25:08.848 real 19m9.928s 00:25:08.848 user 45m22.033s 00:25:08.848 sys 4m54.779s 00:25:08.848 14:26:38 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:08.848 14:26:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:08.848 ************************************ 00:25:08.848 END TEST nvmf_tcp 00:25:08.848 ************************************ 00:25:08.848 14:26:38 -- common/autotest_common.sh@1142 -- # return 0 00:25:08.848 14:26:38 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:08.848 14:26:38 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:08.848 14:26:38 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:08.848 14:26:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:08.848 14:26:38 -- common/autotest_common.sh@10 -- # set +x 00:25:08.848 ************************************ 00:25:08.848 START TEST spdkcli_nvmf_tcp 00:25:08.848 ************************************ 00:25:08.848 14:26:38 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:08.848 * Looking for test storage... 00:25:08.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.848 14:26:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1022046 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1022046 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1022046 ']' 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:09.108 14:26:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:09.108 [2024-07-25 14:26:38.544693] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:25:09.108 [2024-07-25 14:26:38.544786] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022046 ] 00:25:09.108 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.108 [2024-07-25 14:26:38.610856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:09.108 [2024-07-25 14:26:38.742150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.108 [2024-07-25 14:26:38.742155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.044 14:26:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:10.044 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:10.044 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:10.044 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:10.044 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:10.044 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:10.044 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:10.044 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:10.044 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:10.044 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:10.044 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:10.044 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:10.044 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:10.044 ' 00:25:12.584 [2024-07-25 14:26:42.034718] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:13.963 [2024-07-25 14:26:43.255122] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:15.881 [2024-07-25 14:26:45.513977] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:18.414 [2024-07-25 14:26:47.447775] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:19.351 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:19.351 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:19.351 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:19.351 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:19.351 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:19.351 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:19.351 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:19.351 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:19.351 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.351 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.351 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:19.351 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:19.351 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:19.610 14:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:19.610 14:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:19.610 14:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.610 14:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:19.610 14:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:19.610 14:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.610 14:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:19.610 14:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:19.868 14:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:20.126 14:26:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:20.126 14:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:20.126 14:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:20.126 14:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.126 14:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:20.126 14:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.126 14:26:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.126 14:26:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:20.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:20.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:20.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:20.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:20.126 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:20.126 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:20.126 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:20.126 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:20.126 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:20.126 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:20.126 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:20.126 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:20.126 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:20.126 ' 00:25:25.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:25.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:25.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:25.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:25.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:25.395 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:25.395 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:25.395 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:25.395 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:25.395 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:25.395 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:25:25.395 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:25.395 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:25.395 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1022046 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1022046 ']' 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1022046 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1022046 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1022046' 00:25:25.395 killing process with pid 1022046 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1022046 00:25:25.395 14:26:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1022046 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1022046 ']' 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1022046 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1022046 ']' 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1022046 00:25:25.654 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1022046) - No such process 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1022046 is not found' 00:25:25.654 Process with pid 1022046 is not found 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:25.654 00:25:25.654 real 0m16.653s 00:25:25.654 user 0m35.201s 00:25:25.654 sys 0m0.851s 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:25.654 14:26:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:25.654 ************************************ 00:25:25.654 END TEST spdkcli_nvmf_tcp 00:25:25.654 ************************************ 00:25:25.654 14:26:55 -- common/autotest_common.sh@1142 -- # return 0 00:25:25.654 14:26:55 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:25.654 14:26:55 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:25.654 14:26:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:25.654 14:26:55 -- common/autotest_common.sh@10 -- # set +x 00:25:25.654 ************************************ 00:25:25.654 START TEST nvmf_identify_passthru 00:25:25.654 ************************************ 00:25:25.654 14:26:55 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:25.654 * Looking for test storage... 00:25:25.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:25.654 14:26:55 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.654 14:26:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.654 14:26:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.654 14:26:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:25.654 14:26:55 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:25.654 14:26:55 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:25.654 14:26:55 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:25.654 14:26:55 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:25.654 14:26:55 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:25.654 14:26:55 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.654 14:26:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:25.654 14:26:55 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:25.654 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:25.655 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:25.655 14:26:55 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:25.655 14:26:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.189 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.189 14:26:57 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:28.189 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.189 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.189 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.189 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:28.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:28.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:28.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:28.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
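The trace above resolves which kernel net devices back the two supported Intel E810 ports by reading sysfs: for each whitelisted PCI function it globs /sys/bus/pci/devices/<bdf>/net/ and keeps the entries whose operstate is "up". A minimal standalone sketch of that lookup (the BDFs are the ones reported in this run; the loop itself is an illustration, not the test's own helper):

for pci in 0000:0a:00.0 0000:0a:00.1; do
    # Each entry under .../net/ is a kernel net device bound to that PCI function.
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue
        echo "Found net devices under $pci: ${dev##*/}"
    done
done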
00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:25:28.190 00:25:28.190 --- 10.0.0.2 ping statistics --- 00:25:28.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.190 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:25:28.190 00:25:28.190 --- 10.0.0.1 ping statistics --- 00:25:28.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.190 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.190 14:26:57 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.190 14:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:28.190 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:28.190 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.190 14:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:28.190 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:28.190 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:28.190 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:28.190 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:28.191 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:25:28.191 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:28.191 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:28.191 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:28.191 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:28.191 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:28.191 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:25:28.191 14:26:57 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:25:28.191 14:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:28.191 14:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:28.191 14:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:28.191 14:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:28.191 14:26:57 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:28.191 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.377 
14:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:32.377 14:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:32.377 14:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:32.377 14:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:32.377 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.568 14:27:05 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:36.568 14:27:05 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.568 14:27:05 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.568 14:27:05 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1026803 00:25:36.568 14:27:05 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:36.568 14:27:05 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:36.568 14:27:05 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1026803 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1026803 ']' 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:36.568 14:27:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.568 [2024-07-25 14:27:05.922165] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:25:36.568 [2024-07-25 14:27:05.922253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.568 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.568 [2024-07-25 14:27:05.986903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.568 [2024-07-25 14:27:06.094535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.568 [2024-07-25 14:27:06.094610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
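Just before starting the target, the test records the drive's serial and model number by running spdk_nvme_identify directly against the PCIe controller and grepping the output; these values are compared later with what the passthru subsystem reports over TCP. A hedged sketch of that extraction, using the binary path and BDF reported in this run:

SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
bdf=0000:88:00.0
# Identify the local PCIe controller and pull out the fields the test compares later.
serial=$("$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
model=$("$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
echo "local serial=$serial model=$model"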
00:25:36.568 [2024-07-25 14:27:06.094633] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.568 [2024-07-25 14:27:06.094644] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.568 [2024-07-25 14:27:06.094654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.568 [2024-07-25 14:27:06.094734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.568 [2024-07-25 14:27:06.094800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.568 [2024-07-25 14:27:06.094866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.568 [2024-07-25 14:27:06.094869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.568 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:36.568 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:36.568 14:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:36.568 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.568 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.568 INFO: Log level set to 20 00:25:36.568 INFO: Requests: 00:25:36.568 { 00:25:36.568 "jsonrpc": "2.0", 00:25:36.568 "method": "nvmf_set_config", 00:25:36.568 "id": 1, 00:25:36.568 "params": { 00:25:36.568 "admin_cmd_passthru": { 00:25:36.568 "identify_ctrlr": true 00:25:36.568 } 00:25:36.568 } 00:25:36.568 } 00:25:36.568 00:25:36.568 INFO: response: 00:25:36.568 { 00:25:36.568 "jsonrpc": "2.0", 00:25:36.568 "id": 1, 00:25:36.568 "result": true 00:25:36.568 } 00:25:36.568 00:25:36.568 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.568 14:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:36.568 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.568 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.568 INFO: Setting log level to 20 00:25:36.568 INFO: Setting log level to 20 00:25:36.568 INFO: Log level set to 20 00:25:36.568 INFO: Log level set to 20 00:25:36.568 INFO: Requests: 00:25:36.568 { 00:25:36.568 "jsonrpc": "2.0", 00:25:36.568 "method": "framework_start_init", 00:25:36.568 "id": 1 00:25:36.568 } 00:25:36.568 00:25:36.568 INFO: Requests: 00:25:36.568 { 00:25:36.568 "jsonrpc": "2.0", 00:25:36.568 "method": "framework_start_init", 00:25:36.568 "id": 1 00:25:36.568 } 00:25:36.568 00:25:36.828 [2024-07-25 14:27:06.242333] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:36.828 INFO: response: 00:25:36.828 { 00:25:36.828 "jsonrpc": "2.0", 00:25:36.828 "id": 1, 00:25:36.828 "result": true 00:25:36.828 } 00:25:36.828 00:25:36.828 INFO: response: 00:25:36.828 { 00:25:36.828 "jsonrpc": "2.0", 00:25:36.828 "id": 1, 00:25:36.828 "result": true 00:25:36.828 } 00:25:36.828 00:25:36.828 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.828 14:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:36.828 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.828 14:27:06 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.828 INFO: Setting log level to 40 00:25:36.828 INFO: Setting log level to 40 00:25:36.828 INFO: Setting log level to 40 00:25:36.828 [2024-07-25 14:27:06.252427] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.828 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.828 14:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:36.828 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:36.828 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.828 14:27:06 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:36.828 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.828 14:27:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.118 Nvme0n1 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.118 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.118 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.118 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.118 [2024-07-25 14:27:09.143501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.118 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.118 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.118 [ 00:25:40.118 { 00:25:40.118 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:40.118 "subtype": "Discovery", 00:25:40.118 "listen_addresses": [], 00:25:40.119 "allow_any_host": true, 00:25:40.119 "hosts": [] 00:25:40.119 }, 00:25:40.119 { 00:25:40.119 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:40.119 "subtype": "NVMe", 00:25:40.119 "listen_addresses": [ 00:25:40.119 { 00:25:40.119 "trtype": "TCP", 00:25:40.119 "adrfam": "IPv4", 00:25:40.119 "traddr": "10.0.0.2", 00:25:40.119 "trsvcid": "4420" 00:25:40.119 } 00:25:40.119 ], 00:25:40.119 "allow_any_host": true, 00:25:40.119 "hosts": [], 00:25:40.119 "serial_number": 
"SPDK00000000000001", 00:25:40.119 "model_number": "SPDK bdev Controller", 00:25:40.119 "max_namespaces": 1, 00:25:40.119 "min_cntlid": 1, 00:25:40.119 "max_cntlid": 65519, 00:25:40.119 "namespaces": [ 00:25:40.119 { 00:25:40.119 "nsid": 1, 00:25:40.119 "bdev_name": "Nvme0n1", 00:25:40.119 "name": "Nvme0n1", 00:25:40.119 "nguid": "6A9884616424405BBB9BF18C5B7ECBC1", 00:25:40.119 "uuid": "6a988461-6424-405b-bb9b-f18c5b7ecbc1" 00:25:40.119 } 00:25:40.119 ] 00:25:40.119 } 00:25:40.119 ] 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:40.119 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:40.119 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:40.119 14:27:09 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:40.119 rmmod nvme_tcp 00:25:40.119 rmmod nvme_fabrics 00:25:40.119 rmmod nvme_keyring 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:40.119 14:27:09 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1026803 ']' 00:25:40.119 14:27:09 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1026803 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1026803 ']' 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1026803 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1026803 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1026803' 00:25:40.119 killing process with pid 1026803 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1026803 00:25:40.119 14:27:09 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1026803 00:25:41.499 14:27:11 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:41.500 14:27:11 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:41.500 14:27:11 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:41.500 14:27:11 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:41.500 14:27:11 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:41.500 14:27:11 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.500 14:27:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:41.500 14:27:11 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.035 14:27:13 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.035 00:25:44.035 real 0m18.052s 00:25:44.035 user 0m26.545s 00:25:44.035 sys 0m2.326s 00:25:44.035 14:27:13 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:44.035 14:27:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 ************************************ 00:25:44.035 END TEST nvmf_identify_passthru 00:25:44.035 ************************************ 00:25:44.035 14:27:13 -- common/autotest_common.sh@1142 -- # return 0 00:25:44.035 14:27:13 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:44.035 14:27:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:44.035 14:27:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.035 14:27:13 -- common/autotest_common.sh@10 -- # set +x 00:25:44.035 ************************************ 00:25:44.035 START TEST nvmf_dif 00:25:44.035 ************************************ 00:25:44.035 14:27:13 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:44.035 * Looking for test storage... 
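The identify_passthru test that just finished drives the target entirely through JSON-RPC: identify passthru is enabled before framework init (the target was started with --wait-for-rpc), a TCP transport is created, the local PCIe controller is attached as bdev Nvme0, and it is exported through subsystem cnode1 on 10.0.0.2:4420. A condensed sketch of that sequence using scripts/rpc.py (an assumption: the test's rpc_cmd wrapper forwards the same arguments to rpc.py over /var/tmp/spdk.sock):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_set_config --passthru-identify-ctrlr      # must happen before framework init
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Verify the passthru path: the serial/model seen over the fabric must match the PCIe values.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1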
00:25:44.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.035 14:27:13 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.035 14:27:13 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.035 14:27:13 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.035 14:27:13 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.035 14:27:13 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.035 14:27:13 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.035 14:27:13 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.035 14:27:13 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:25:44.035 14:27:13 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.035 14:27:13 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:44.035 14:27:13 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:44.035 14:27:13 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:44.035 14:27:13 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:44.035 14:27:13 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.035 14:27:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:44.035 14:27:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.035 14:27:13 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.035 14:27:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:45.940 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:45.940 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:45.940 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.940 14:27:15 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:45.941 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.941 14:27:15 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:45.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:25:45.941 00:25:45.941 --- 10.0.0.2 ping statistics --- 00:25:45.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.941 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:25:45.941 00:25:45.941 --- 10.0.0.1 ping statistics --- 00:25:45.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.941 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:45.941 14:27:15 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:46.914 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:46.914 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:46.914 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:46.914 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:46.914 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:46.914 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:46.914 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:46.914 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:46.914 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:46.914 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:46.914 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:46.914 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:46.914 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:46.914 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:46.914 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:46.914 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:46.914 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:47.174 14:27:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:47.174 14:27:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:47.174 14:27:16 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:47.174 14:27:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1030508 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:47.174 14:27:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1030508 00:25:47.174 14:27:16 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1030508 ']' 00:25:47.174 14:27:16 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.174 14:27:16 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.174 14:27:16 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.174 14:27:16 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.174 14:27:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.174 [2024-07-25 14:27:16.692556] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:25:47.174 [2024-07-25 14:27:16.692643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.174 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.174 [2024-07-25 14:27:16.757285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.433 [2024-07-25 14:27:16.867890] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.433 [2024-07-25 14:27:16.867955] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.433 [2024-07-25 14:27:16.867968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.433 [2024-07-25 14:27:16.867987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.433 [2024-07-25 14:27:16.867997] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
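The fio_dif_1_default job that follows does not go through the kernel block layer: a null bdev with 16-byte metadata and DIF type 1 is exported over NVMe/TCP, and fio is preloaded with SPDK's bdev fio plugin so it can open that subsystem directly, taking its bdev configuration from a generated JSON document. A simplified, hedged sketch of the invocation (paths and options are the ones traced below; bdev.json and job.fio are placeholder file names standing in for the /dev/fd redirections the test builds on the fly):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# bdev.json would carry the bdev_nvme_attach_controller entry printed further down
# (trtype tcp, traddr 10.0.0.2, trsvcid 4420, subnqn nqn.2016-06.io.spdk:cnode0).
LD_PRELOAD=$SPDK/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio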
00:25:47.433 [2024-07-25 14:27:16.868026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.433 14:27:16 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.433 14:27:16 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:47.433 14:27:16 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.433 14:27:16 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:47.433 14:27:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.434 14:27:17 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.434 14:27:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:47.434 14:27:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:47.434 14:27:17 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.434 14:27:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.434 [2024-07-25 14:27:17.013222] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.434 14:27:17 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.434 14:27:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:47.434 14:27:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:47.434 14:27:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:47.434 14:27:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.434 ************************************ 00:25:47.434 START TEST fio_dif_1_default 00:25:47.434 ************************************ 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.434 bdev_null0 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.434 [2024-07-25 14:27:17.073566] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:47.434 { 00:25:47.434 "params": { 00:25:47.434 "name": "Nvme$subsystem", 00:25:47.434 "trtype": "$TEST_TRANSPORT", 00:25:47.434 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.434 "adrfam": "ipv4", 00:25:47.434 "trsvcid": "$NVMF_PORT", 00:25:47.434 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.434 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.434 "hdgst": ${hdgst:-false}, 00:25:47.434 "ddgst": ${ddgst:-false} 00:25:47.434 }, 00:25:47.434 "method": "bdev_nvme_attach_controller" 00:25:47.434 } 00:25:47.434 EOF 00:25:47.434 )") 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:47.434 14:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:47.434 "params": { 00:25:47.434 "name": "Nvme0", 00:25:47.434 "trtype": "tcp", 00:25:47.434 "traddr": "10.0.0.2", 00:25:47.434 "adrfam": "ipv4", 00:25:47.434 "trsvcid": "4420", 00:25:47.434 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.434 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:47.434 "hdgst": false, 00:25:47.434 "ddgst": false 00:25:47.434 }, 00:25:47.434 "method": "bdev_nvme_attach_controller" 00:25:47.434 }' 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:47.693 14:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.693 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:47.693 fio-3.35 00:25:47.693 Starting 1 thread 00:25:47.951 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.144 00:26:00.144 filename0: (groupid=0, jobs=1): err= 0: pid=1030785: Thu Jul 25 14:27:27 2024 00:26:00.144 read: IOPS=189, BW=756KiB/s (775kB/s)(7584KiB/10027msec) 00:26:00.144 slat (nsec): min=5322, max=87169, avg=8991.47, stdev=3891.16 00:26:00.144 clat (usec): min=559, max=45720, avg=21125.78, stdev=20391.07 00:26:00.144 lat (usec): min=566, max=45754, avg=21134.77, stdev=20390.89 00:26:00.144 clat percentiles (usec): 00:26:00.144 | 1.00th=[ 578], 5.00th=[ 594], 10.00th=[ 611], 20.00th=[ 652], 00:26:00.144 | 30.00th=[ 668], 40.00th=[ 693], 50.00th=[41157], 60.00th=[41157], 00:26:00.144 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:26:00.144 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876], 00:26:00.144 | 99.99th=[45876] 00:26:00.144 bw ( KiB/s): min= 672, max= 768, per=99.95%, avg=756.80, stdev=26.01, samples=20 00:26:00.144 iops : min= 168, max= 192, 
avg=189.20, stdev= 6.50, samples=20 00:26:00.144 lat (usec) : 750=49.05%, 1000=0.74% 00:26:00.144 lat (msec) : 50=50.21% 00:26:00.144 cpu : usr=89.56%, sys=10.16%, ctx=23, majf=0, minf=264 00:26:00.144 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:00.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.145 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.145 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:00.145 00:26:00.145 Run status group 0 (all jobs): 00:26:00.145 READ: bw=756KiB/s (775kB/s), 756KiB/s-756KiB/s (775kB/s-775kB/s), io=7584KiB (7766kB), run=10027-10027msec 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 00:26:00.145 real 0m11.264s 00:26:00.145 user 0m10.227s 00:26:00.145 sys 0m1.290s 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 ************************************ 00:26:00.145 END TEST fio_dif_1_default 00:26:00.145 ************************************ 00:26:00.145 14:27:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:00.145 14:27:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:00.145 14:27:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:00.145 14:27:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 ************************************ 00:26:00.145 START TEST fio_dif_1_multi_subsystems 00:26:00.145 ************************************ 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:00.145 14:27:28 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 bdev_null0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 [2024-07-25 14:27:28.388315] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 bdev_null1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.145 { 00:26:00.145 "params": { 00:26:00.145 "name": "Nvme$subsystem", 00:26:00.145 "trtype": "$TEST_TRANSPORT", 00:26:00.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.145 "adrfam": "ipv4", 00:26:00.145 "trsvcid": "$NVMF_PORT", 00:26:00.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.145 "hdgst": ${hdgst:-false}, 00:26:00.145 "ddgst": ${ddgst:-false} 00:26:00.145 }, 00:26:00.145 "method": "bdev_nvme_attach_controller" 00:26:00.145 } 00:26:00.145 EOF 00:26:00.145 )") 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # 
gen_fio_conf 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.145 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.146 { 00:26:00.146 "params": { 00:26:00.146 "name": "Nvme$subsystem", 00:26:00.146 "trtype": "$TEST_TRANSPORT", 00:26:00.146 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.146 "adrfam": "ipv4", 00:26:00.146 "trsvcid": "$NVMF_PORT", 00:26:00.146 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.146 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.146 "hdgst": ${hdgst:-false}, 00:26:00.146 "ddgst": ${ddgst:-false} 00:26:00.146 }, 00:26:00.146 "method": "bdev_nvme_attach_controller" 00:26:00.146 } 00:26:00.146 EOF 00:26:00.146 )") 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
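For reference, the target-side setup that the xtrace above walks through for this two-subsystem run boils down to a handful of RPCs. The sketch below condenses the rpc_cmd calls recorded in the log; driving them directly through scripts/rpc.py (path assumed) is only an illustration, not what the harness actually executes.

    # Sketch: DIF-capable TCP target with two null-bdev subsystems (dif-type 1),
    # condensed from the rpc_cmd calls logged above. scripts/rpc.py location is assumed.
    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
    for sub in 0 1; do
        $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
            -t tcp -a 10.0.0.2 -s 4420
    done

The --dif-insert-or-strip transport option is what these dif tests exercise: broadly, the TCP target inserts and strips the protection information on behalf of the host, so the initiator can issue plain block I/O while the backing null bdevs carry the 16-byte metadata configured above.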
00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:00.146 "params": { 00:26:00.146 "name": "Nvme0", 00:26:00.146 "trtype": "tcp", 00:26:00.146 "traddr": "10.0.0.2", 00:26:00.146 "adrfam": "ipv4", 00:26:00.146 "trsvcid": "4420", 00:26:00.146 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.146 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.146 "hdgst": false, 00:26:00.146 "ddgst": false 00:26:00.146 }, 00:26:00.146 "method": "bdev_nvme_attach_controller" 00:26:00.146 },{ 00:26:00.146 "params": { 00:26:00.146 "name": "Nvme1", 00:26:00.146 "trtype": "tcp", 00:26:00.146 "traddr": "10.0.0.2", 00:26:00.146 "adrfam": "ipv4", 00:26:00.146 "trsvcid": "4420", 00:26:00.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.146 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:00.146 "hdgst": false, 00:26:00.146 "ddgst": false 00:26:00.146 }, 00:26:00.146 "method": "bdev_nvme_attach_controller" 00:26:00.146 }' 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:00.146 14:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.146 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.146 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.146 fio-3.35 00:26:00.146 Starting 2 threads 00:26:00.146 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.132 00:26:10.132 filename0: (groupid=0, jobs=1): err= 0: pid=1032233: Thu Jul 25 14:27:39 2024 00:26:10.132 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:26:10.132 slat (nsec): min=4804, max=29427, avg=9692.07, stdev=2536.43 00:26:10.133 clat (usec): min=40862, max=48461, avg=41000.41, stdev=479.55 00:26:10.133 lat (usec): min=40870, max=48473, avg=41010.11, stdev=479.40 00:26:10.133 clat percentiles (usec): 00:26:10.133 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:10.133 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:10.133 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:10.133 | 99.00th=[41157], 99.50th=[41157], 99.90th=[48497], 99.95th=[48497], 00:26:10.133 | 99.99th=[48497] 
00:26:10.133 bw ( KiB/s): min= 384, max= 416, per=50.09%, avg=388.80, stdev=11.72, samples=20 00:26:10.133 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:10.133 lat (msec) : 50=100.00% 00:26:10.133 cpu : usr=94.87%, sys=4.84%, ctx=18, majf=0, minf=81 00:26:10.133 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.133 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.133 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:10.133 filename1: (groupid=0, jobs=1): err= 0: pid=1032234: Thu Jul 25 14:27:39 2024 00:26:10.133 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10018msec) 00:26:10.133 slat (nsec): min=4409, max=32604, avg=9509.87, stdev=2458.53 00:26:10.133 clat (usec): min=40836, max=48532, avg=41537.20, stdev=672.86 00:26:10.133 lat (usec): min=40844, max=48546, avg=41546.71, stdev=672.82 00:26:10.133 clat percentiles (usec): 00:26:10.133 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:10.133 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:26:10.133 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:10.133 | 99.00th=[42206], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:26:10.133 | 99.99th=[48497] 00:26:10.133 bw ( KiB/s): min= 352, max= 416, per=49.57%, avg=384.00, stdev=10.38, samples=20 00:26:10.133 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:26:10.133 lat (msec) : 50=100.00% 00:26:10.133 cpu : usr=94.37%, sys=5.35%, ctx=28, majf=0, minf=193 00:26:10.133 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.133 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.133 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:10.133 00:26:10.133 Run status group 0 (all jobs): 00:26:10.133 READ: bw=775KiB/s (793kB/s), 385KiB/s-390KiB/s (394kB/s-399kB/s), io=7760KiB (7946kB), run=10012-10018msec 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.133 14:27:39 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.133 00:26:10.133 real 0m11.323s 00:26:10.133 user 0m20.399s 00:26:10.133 sys 0m1.284s 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 ************************************ 00:26:10.133 END TEST fio_dif_1_multi_subsystems 00:26:10.133 ************************************ 00:26:10.133 14:27:39 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:10.133 14:27:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:10.133 14:27:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:10.133 14:27:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 ************************************ 00:26:10.133 START TEST fio_dif_rand_params 00:26:10.133 ************************************ 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
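The first rand_params pass above selects a dif-type 3 null bdev and the fio parameters bs=128k, numjobs=3, iodepth=3, runtime=5. The job file itself is produced on the fly by gen_fio_conf and is never echoed into the log, so the following is only a rough reconstruction from those parameters and from the fio banner further below; the filename entry and the thread/time_based settings are assumptions.

    ; Approximate job file for the 128k three-thread pass (reconstructed, not captured)
    [global]
    ioengine=spdk_bdev   ; confirmed by the fio banner below
    thread=1             ; assumed; the SPDK fio plugins are normally run with thread=1
    rw=randread          ; confirmed by the fio banner below
    bs=128k
    numjobs=3
    iodepth=3
    time_based=1         ; assumed
    runtime=5

    [filename0]
    filename=Nvme0n1     ; assumed namespace name for the "Nvme0" controller attached via bdev_nvme_attach_controller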
00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 bdev_null0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.133 [2024-07-25 14:27:39.765014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:10.133 14:27:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:10.133 { 00:26:10.133 "params": { 00:26:10.133 "name": "Nvme$subsystem", 00:26:10.133 "trtype": "$TEST_TRANSPORT", 00:26:10.133 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.133 "adrfam": "ipv4", 00:26:10.133 "trsvcid": "$NVMF_PORT", 00:26:10.133 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.133 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.134 "hdgst": ${hdgst:-false}, 00:26:10.134 "ddgst": ${ddgst:-false} 00:26:10.134 }, 00:26:10.134 "method": "bdev_nvme_attach_controller" 00:26:10.134 } 00:26:10.134 EOF 00:26:10.134 )") 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:10.134 14:27:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:10.134 "params": { 00:26:10.134 "name": "Nvme0", 00:26:10.134 "trtype": "tcp", 00:26:10.134 "traddr": "10.0.0.2", 00:26:10.134 "adrfam": "ipv4", 00:26:10.134 "trsvcid": "4420", 00:26:10.134 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:10.134 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:10.134 "hdgst": false, 00:26:10.134 "ddgst": false 00:26:10.134 }, 00:26:10.134 "method": "bdev_nvme_attach_controller" 00:26:10.134 }' 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:10.392 14:27:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.392 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:10.392 ... 
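The invocation pattern in the entries just above is how every fio run in this file is wired up: the SPDK bdev layer is configured purely through a JSON document handed to the fio plugin with --spdk_json_conf, and both that JSON and the generated job file are passed as anonymous file descriptors (/dev/fd/62 and /dev/fd/61) opened by the shell. Reproduced by hand it would look roughly like the sketch below; the plugin path is the one from the LD_PRELOAD line above, while $bdev_json and $fio_job are placeholder variables standing in for the two generated documents.

    # Sketch of the same invocation done manually (contents of the two documents omitted)
    plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
    LD_PRELOAD="$plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev \
        --spdk_json_conf=<(printf '%s\n' "$bdev_json") \
        <(printf '%s\n' "$fio_job")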
00:26:10.392 fio-3.35 00:26:10.392 Starting 3 threads 00:26:10.651 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.209 00:26:17.209 filename0: (groupid=0, jobs=1): err= 0: pid=1033599: Thu Jul 25 14:27:45 2024 00:26:17.209 read: IOPS=220, BW=27.5MiB/s (28.8MB/s)(139MiB/5048msec) 00:26:17.209 slat (nsec): min=4543, max=75426, avg=18659.90, stdev=5815.06 00:26:17.209 clat (usec): min=4887, max=54794, avg=13569.67, stdev=5867.45 00:26:17.209 lat (usec): min=4896, max=54803, avg=13588.33, stdev=5867.14 00:26:17.209 clat percentiles (usec): 00:26:17.209 | 1.00th=[ 7767], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11338], 00:26:17.209 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:26:17.209 | 70.00th=[13960], 80.00th=[14615], 90.00th=[15401], 95.00th=[16057], 00:26:17.209 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53740], 99.95th=[54789], 00:26:17.209 | 99.99th=[54789] 00:26:17.209 bw ( KiB/s): min=17664, max=32000, per=32.38%, avg=28364.80, stdev=4038.35, samples=10 00:26:17.209 iops : min= 138, max= 250, avg=221.60, stdev=31.55, samples=10 00:26:17.209 lat (msec) : 10=7.02%, 20=90.91%, 50=0.36%, 100=1.71% 00:26:17.209 cpu : usr=91.52%, sys=6.58%, ctx=168, majf=0, minf=109 00:26:17.209 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.209 issued rwts: total=1111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.209 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.209 filename0: (groupid=0, jobs=1): err= 0: pid=1033600: Thu Jul 25 14:27:45 2024 00:26:17.209 read: IOPS=236, BW=29.5MiB/s (30.9MB/s)(148MiB/5007msec) 00:26:17.209 slat (nsec): min=4580, max=84025, avg=17824.62, stdev=5201.07 00:26:17.209 clat (usec): min=4866, max=46195, avg=12681.52, stdev=2663.47 00:26:17.209 lat (usec): min=4879, max=46208, avg=12699.34, stdev=2664.12 00:26:17.209 clat percentiles (usec): 00:26:17.209 | 1.00th=[ 6980], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[11076], 00:26:17.209 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12780], 60.00th=[13173], 00:26:17.210 | 70.00th=[13698], 80.00th=[14353], 90.00th=[15139], 95.00th=[15795], 00:26:17.210 | 99.00th=[16909], 99.50th=[17433], 99.90th=[45876], 99.95th=[46400], 00:26:17.210 | 99.99th=[46400] 00:26:17.210 bw ( KiB/s): min=28416, max=33346, per=34.49%, avg=30214.60, stdev=1433.62, samples=10 00:26:17.210 iops : min= 222, max= 260, avg=236.00, stdev=11.08, samples=10 00:26:17.210 lat (msec) : 10=11.25%, 20=88.49%, 50=0.25% 00:26:17.210 cpu : usr=93.73%, sys=5.75%, ctx=14, majf=0, minf=175 00:26:17.210 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.210 issued rwts: total=1182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.210 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.210 filename0: (groupid=0, jobs=1): err= 0: pid=1033601: Thu Jul 25 14:27:45 2024 00:26:17.210 read: IOPS=230, BW=28.8MiB/s (30.2MB/s)(145MiB/5048msec) 00:26:17.210 slat (nsec): min=4167, max=73391, avg=17026.35, stdev=4888.59 00:26:17.210 clat (usec): min=3714, max=55637, avg=12974.89, stdev=5374.62 00:26:17.210 lat (usec): min=3726, max=55660, avg=12991.92, stdev=5374.96 00:26:17.210 clat percentiles (usec): 00:26:17.210 | 
1.00th=[ 7242], 5.00th=[ 8717], 10.00th=[10028], 20.00th=[11207], 00:26:17.210 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911], 00:26:17.210 | 70.00th=[13304], 80.00th=[13829], 90.00th=[14615], 95.00th=[15270], 00:26:17.210 | 99.00th=[51643], 99.50th=[52691], 99.90th=[54789], 99.95th=[55837], 00:26:17.210 | 99.99th=[55837] 00:26:17.210 bw ( KiB/s): min=26112, max=32833, per=33.87%, avg=29676.90, stdev=1844.27, samples=10 00:26:17.210 iops : min= 204, max= 256, avg=231.80, stdev=14.31, samples=10 00:26:17.210 lat (msec) : 4=0.09%, 10=9.72%, 20=88.47%, 50=0.69%, 100=1.03% 00:26:17.210 cpu : usr=94.06%, sys=5.39%, ctx=10, majf=0, minf=153 00:26:17.210 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.210 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.210 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.210 issued rwts: total=1162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.210 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.210 00:26:17.210 Run status group 0 (all jobs): 00:26:17.210 READ: bw=85.6MiB/s (89.7MB/s), 27.5MiB/s-29.5MiB/s (28.8MB/s-30.9MB/s), io=432MiB (453MB), run=5007-5048msec 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:17.210 
14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 bdev_null0 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 [2024-07-25 14:27:45.843565] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 bdev_null1 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
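This second rand_params pass recreates the namespaces with --dif-type 2 and spreads them over three subsystems (bdev_null0 through bdev_null2) ahead of the 24-thread run that follows. A quick way to confirm that a null bdev really carries the expected metadata and DIF settings is bdev_get_bdevs; the exact field names in the sketch are from memory and may vary between SPDK releases, so treat it as illustrative only.

    # Sketch: inspect metadata/DIF settings on one of the dif-type 2 null bdevs
    # (scripts/rpc.py path assumed; jq is already used elsewhere in this test)
    scripts/rpc.py bdev_get_bdevs -b bdev_null0 | jq '.[0] | {name, block_size, md_size, dif_type}'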
00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 bdev_null2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.210 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 
-- # local fio_dir=/usr/src/fio 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.211 { 00:26:17.211 "params": { 00:26:17.211 "name": "Nvme$subsystem", 00:26:17.211 "trtype": "$TEST_TRANSPORT", 00:26:17.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.211 "adrfam": "ipv4", 00:26:17.211 "trsvcid": "$NVMF_PORT", 00:26:17.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.211 "hdgst": ${hdgst:-false}, 00:26:17.211 "ddgst": ${ddgst:-false} 00:26:17.211 }, 00:26:17.211 "method": "bdev_nvme_attach_controller" 00:26:17.211 } 00:26:17.211 EOF 00:26:17.211 )") 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.211 { 00:26:17.211 "params": { 00:26:17.211 "name": "Nvme$subsystem", 00:26:17.211 "trtype": "$TEST_TRANSPORT", 00:26:17.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.211 "adrfam": "ipv4", 00:26:17.211 "trsvcid": "$NVMF_PORT", 00:26:17.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.211 "hdgst": ${hdgst:-false}, 00:26:17.211 "ddgst": ${ddgst:-false} 00:26:17.211 }, 00:26:17.211 "method": "bdev_nvme_attach_controller" 00:26:17.211 } 00:26:17.211 EOF 00:26:17.211 )") 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= 
files )) 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.211 { 00:26:17.211 "params": { 00:26:17.211 "name": "Nvme$subsystem", 00:26:17.211 "trtype": "$TEST_TRANSPORT", 00:26:17.211 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.211 "adrfam": "ipv4", 00:26:17.211 "trsvcid": "$NVMF_PORT", 00:26:17.211 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.211 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.211 "hdgst": ${hdgst:-false}, 00:26:17.211 "ddgst": ${ddgst:-false} 00:26:17.211 }, 00:26:17.211 "method": "bdev_nvme_attach_controller" 00:26:17.211 } 00:26:17.211 EOF 00:26:17.211 )") 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:17.211 "params": { 00:26:17.211 "name": "Nvme0", 00:26:17.211 "trtype": "tcp", 00:26:17.211 "traddr": "10.0.0.2", 00:26:17.211 "adrfam": "ipv4", 00:26:17.211 "trsvcid": "4420", 00:26:17.211 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.211 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.211 "hdgst": false, 00:26:17.211 "ddgst": false 00:26:17.211 }, 00:26:17.211 "method": "bdev_nvme_attach_controller" 00:26:17.211 },{ 00:26:17.211 "params": { 00:26:17.211 "name": "Nvme1", 00:26:17.211 "trtype": "tcp", 00:26:17.211 "traddr": "10.0.0.2", 00:26:17.211 "adrfam": "ipv4", 00:26:17.211 "trsvcid": "4420", 00:26:17.211 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.211 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.211 "hdgst": false, 00:26:17.211 "ddgst": false 00:26:17.211 }, 00:26:17.211 "method": "bdev_nvme_attach_controller" 00:26:17.211 },{ 00:26:17.211 "params": { 00:26:17.211 "name": "Nvme2", 00:26:17.211 "trtype": "tcp", 00:26:17.211 "traddr": "10.0.0.2", 00:26:17.211 "adrfam": "ipv4", 00:26:17.211 "trsvcid": "4420", 00:26:17.211 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:17.211 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:17.211 "hdgst": false, 00:26:17.211 "ddgst": false 00:26:17.211 }, 00:26:17.211 "method": "bdev_nvme_attach_controller" 00:26:17.211 }' 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 
-- # asan_lib= 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:17.211 14:27:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.211 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.211 ... 00:26:17.211 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.211 ... 00:26:17.211 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.211 ... 00:26:17.211 fio-3.35 00:26:17.211 Starting 24 threads 00:26:17.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.426 00:26:29.426 filename0: (groupid=0, jobs=1): err= 0: pid=1034374: Thu Jul 25 14:27:57 2024 00:26:29.426 read: IOPS=67, BW=269KiB/s (275kB/s)(2688KiB/10011msec) 00:26:29.426 slat (usec): min=4, max=106, avg=55.72, stdev=21.52 00:26:29.426 clat (msec): min=5, max=388, avg=237.86, stdev=75.91 00:26:29.426 lat (msec): min=6, max=388, avg=237.91, stdev=75.92 00:26:29.426 clat percentiles (msec): 00:26:29.426 | 1.00th=[ 6], 5.00th=[ 47], 10.00th=[ 148], 20.00th=[ 190], 00:26:29.426 | 30.00th=[ 230], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:26:29.426 | 70.00th=[ 271], 80.00th=[ 284], 90.00th=[ 321], 95.00th=[ 338], 00:26:29.426 | 99.00th=[ 380], 99.50th=[ 384], 99.90th=[ 388], 99.95th=[ 388], 00:26:29.426 | 99.99th=[ 388] 00:26:29.426 bw ( KiB/s): min= 128, max= 640, per=4.05%, avg=262.40, stdev=105.80, samples=20 00:26:29.426 iops : min= 32, max= 160, avg=65.60, stdev=26.45, samples=20 00:26:29.426 lat (msec) : 10=2.38%, 50=4.76%, 250=38.39%, 500=54.46% 00:26:29.426 cpu : usr=97.59%, sys=1.67%, ctx=67, majf=0, minf=44 00:26:29.426 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:29.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.426 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.426 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.426 filename0: (groupid=0, jobs=1): err= 0: pid=1034375: Thu Jul 25 14:27:57 2024 00:26:29.426 read: IOPS=65, BW=262KiB/s (268kB/s)(2624KiB/10015msec) 00:26:29.426 slat (nsec): min=8445, max=91962, avg=28612.50, stdev=11654.98 00:26:29.426 clat (msec): min=15, max=438, avg=244.00, stdev=56.28 00:26:29.426 lat (msec): min=15, max=438, avg=244.03, stdev=56.28 00:26:29.426 clat percentiles (msec): 00:26:29.426 | 1.00th=[ 16], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 197], 00:26:29.426 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 253], 00:26:29.426 | 70.00th=[ 262], 80.00th=[ 279], 90.00th=[ 313], 95.00th=[ 313], 00:26:29.426 | 99.00th=[ 351], 99.50th=[ 376], 99.90th=[ 439], 99.95th=[ 439], 00:26:29.426 | 99.99th=[ 439] 00:26:29.426 bw ( KiB/s): min= 128, max= 384, per=3.95%, avg=256.00, stdev=43.33, samples=19 00:26:29.426 iops : min= 32, max= 96, avg=64.00, stdev=10.83, samples=19 00:26:29.426 lat (msec) : 20=2.44%, 250=44.51%, 500=53.05% 00:26:29.426 cpu : usr=98.51%, sys=1.09%, ctx=19, majf=0, minf=34 00:26:29.426 IO depths : 1=4.6%, 2=10.8%, 
4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:26:29.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.426 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.426 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.426 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.426 filename0: (groupid=0, jobs=1): err= 0: pid=1034376: Thu Jul 25 14:27:57 2024 00:26:29.426 read: IOPS=72, BW=290KiB/s (297kB/s)(2904KiB/10015msec) 00:26:29.426 slat (nsec): min=7965, max=57884, avg=23286.78, stdev=12882.92 00:26:29.426 clat (msec): min=88, max=350, avg=220.41, stdev=40.19 00:26:29.426 lat (msec): min=88, max=350, avg=220.44, stdev=40.19 00:26:29.426 clat percentiles (msec): 00:26:29.426 | 1.00th=[ 89], 5.00th=[ 134], 10.00th=[ 184], 20.00th=[ 192], 00:26:29.426 | 30.00th=[ 201], 40.00th=[ 209], 50.00th=[ 224], 60.00th=[ 241], 00:26:29.426 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 262], 95.00th=[ 275], 00:26:29.426 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 351], 99.95th=[ 351], 00:26:29.426 | 99.99th=[ 351] 00:26:29.426 bw ( KiB/s): min= 256, max= 384, per=4.42%, avg=286.40, stdev=48.11, samples=20 00:26:29.426 iops : min= 64, max= 96, avg=71.60, stdev=12.03, samples=20 00:26:29.426 lat (msec) : 100=1.38%, 250=73.83%, 500=24.79% 00:26:29.426 cpu : usr=98.41%, sys=1.23%, ctx=15, majf=0, minf=28 00:26:29.426 IO depths : 1=1.8%, 2=6.2%, 4=19.4%, 8=61.8%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:29.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.426 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.426 issued rwts: total=726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.427 filename0: (groupid=0, jobs=1): err= 0: pid=1034377: Thu Jul 25 14:27:57 2024 00:26:29.427 read: IOPS=62, BW=249KiB/s (255kB/s)(2496KiB/10005msec) 00:26:29.427 slat (usec): min=10, max=100, avg=25.10, stdev=12.16 00:26:29.427 clat (msec): min=13, max=455, avg=256.33, stdev=64.22 00:26:29.427 lat (msec): min=13, max=455, avg=256.36, stdev=64.22 00:26:29.427 clat percentiles (msec): 00:26:29.427 | 1.00th=[ 14], 5.00th=[ 153], 10.00th=[ 184], 20.00th=[ 228], 00:26:29.427 | 30.00th=[ 253], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 271], 00:26:29.427 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 334], 95.00th=[ 338], 00:26:29.427 | 99.00th=[ 376], 99.50th=[ 397], 99.90th=[ 456], 99.95th=[ 456], 00:26:29.427 | 99.99th=[ 456] 00:26:29.427 bw ( KiB/s): min= 128, max= 368, per=3.75%, avg=243.20, stdev=55.81, samples=20 00:26:29.427 iops : min= 32, max= 92, avg=60.80, stdev=13.95, samples=20 00:26:29.427 lat (msec) : 20=2.56%, 100=0.64%, 250=25.64%, 500=71.15% 00:26:29.427 cpu : usr=97.93%, sys=1.47%, ctx=31, majf=0, minf=25 00:26:29.427 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:29.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.427 filename0: (groupid=0, jobs=1): err= 0: pid=1034378: Thu Jul 25 14:27:57 2024 00:26:29.427 read: IOPS=63, BW=256KiB/s (262kB/s)(2560KiB/10005msec) 00:26:29.427 slat (usec): min=8, max=101, avg=24.75, stdev=13.00 00:26:29.427 clat (msec): min=13, max=485, avg=249.89, stdev=70.08 
00:26:29.427 lat (msec): min=13, max=485, avg=249.91, stdev=70.08 00:26:29.427 clat percentiles (msec): 00:26:29.427 | 1.00th=[ 16], 5.00th=[ 131], 10.00th=[ 174], 20.00th=[ 203], 00:26:29.427 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 264], 00:26:29.427 | 70.00th=[ 275], 80.00th=[ 300], 90.00th=[ 326], 95.00th=[ 359], 00:26:29.427 | 99.00th=[ 405], 99.50th=[ 430], 99.90th=[ 485], 99.95th=[ 485], 00:26:29.427 | 99.99th=[ 485] 00:26:29.427 bw ( KiB/s): min= 128, max= 384, per=3.74%, avg=242.53, stdev=55.49, samples=19 00:26:29.427 iops : min= 32, max= 96, avg=60.63, stdev=13.87, samples=19 00:26:29.427 lat (msec) : 20=2.50%, 50=0.31%, 250=41.09%, 500=56.09% 00:26:29.427 cpu : usr=97.94%, sys=1.45%, ctx=33, majf=0, minf=28 00:26:29.427 IO depths : 1=3.6%, 2=9.8%, 4=25.0%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:26:29.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.427 filename0: (groupid=0, jobs=1): err= 0: pid=1034379: Thu Jul 25 14:27:57 2024 00:26:29.427 read: IOPS=76, BW=307KiB/s (315kB/s)(3080KiB/10020msec) 00:26:29.427 slat (usec): min=8, max=102, avg=24.20, stdev=22.14 00:26:29.427 clat (msec): min=80, max=342, avg=207.96, stdev=46.70 00:26:29.427 lat (msec): min=80, max=342, avg=207.98, stdev=46.71 00:26:29.427 clat percentiles (msec): 00:26:29.427 | 1.00th=[ 82], 5.00th=[ 108], 10.00th=[ 161], 20.00th=[ 178], 00:26:29.427 | 30.00th=[ 184], 40.00th=[ 197], 50.00th=[ 205], 60.00th=[ 220], 00:26:29.427 | 70.00th=[ 236], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 284], 00:26:29.427 | 99.00th=[ 321], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:26:29.427 | 99.99th=[ 342] 00:26:29.427 bw ( KiB/s): min= 224, max= 384, per=4.65%, avg=301.60, stdev=49.87, samples=20 00:26:29.427 iops : min= 56, max= 96, avg=75.40, stdev=12.47, samples=20 00:26:29.427 lat (msec) : 100=2.08%, 250=82.08%, 500=15.84% 00:26:29.427 cpu : usr=97.72%, sys=1.71%, ctx=82, majf=0, minf=42 00:26:29.427 IO depths : 1=1.4%, 2=3.9%, 4=13.1%, 8=70.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:29.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 complete : 0=0.0%, 4=90.6%, 8=4.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 issued rwts: total=770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.427 filename0: (groupid=0, jobs=1): err= 0: pid=1034380: Thu Jul 25 14:27:57 2024 00:26:29.427 read: IOPS=65, BW=262KiB/s (269kB/s)(2624KiB/10006msec) 00:26:29.427 slat (nsec): min=8500, max=90405, avg=34846.73, stdev=20646.85 00:26:29.427 clat (msec): min=117, max=380, avg=243.77, stdev=42.41 00:26:29.427 lat (msec): min=117, max=380, avg=243.80, stdev=42.41 00:26:29.427 clat percentiles (msec): 00:26:29.427 | 1.00th=[ 150], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 199], 00:26:29.427 | 30.00th=[ 220], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:26:29.427 | 70.00th=[ 262], 80.00th=[ 279], 90.00th=[ 305], 95.00th=[ 313], 00:26:29.427 | 99.00th=[ 326], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:26:29.427 | 99.99th=[ 380] 00:26:29.427 bw ( KiB/s): min= 144, max= 384, per=3.94%, avg=256.00, stdev=40.21, samples=20 00:26:29.427 iops : min= 36, max= 96, avg=64.00, stdev=10.05, samples=20 00:26:29.427 lat (msec) : 250=50.91%, 500=49.09% 
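The per-job fio summaries in this group are internally consistent and can be sanity-checked by hand. As a minimal illustrative sketch (not part of the test suite), the figures below are copied from the first job reported above (pid=1034374) and from the aggregate READ bandwidth printed in the run-status summary at the end of this group; the headline BW/IOPS follow from total bytes, I/O count and runtime, and per= is, to rounding, the job's mean bandwidth as a fraction of the group total:

# Sanity-check of fio's summary arithmetic, using values copied from the log above.
total_kib=2688       # from "read: ... (2688KiB/10011msec)" for pid=1034374
ios=672              # from "issued rwts: total=672"
runtime_ms=10011
bw_avg_kib=262.40    # from "bw ( KiB/s): ... avg=262.40"
group_bw_kib=6475    # from the later "Run status group 0 ... READ: bw=6475KiB/s" line

awk -v kib="$total_kib" -v ios="$ios" -v ms="$runtime_ms" \
    -v avg="$bw_avg_kib" -v grp="$group_bw_kib" 'BEGIN {
  printf "headline bw  : %.0f KiB/s (log: BW=269KiB/s)\n", kib * 1000 / ms
  printf "headline iops: %.0f       (log: IOPS=67)\n",     ios * 1000 / ms
  printf "block size   : %.0f KiB   (4k random read)\n",   kib / ios
  printf "per          : %.2f%%     (log: per=4.05%%)\n",  100 * avg / grp
}'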
00:26:29.427 cpu : usr=97.99%, sys=1.58%, ctx=18, majf=0, minf=40 00:26:29.427 IO depths : 1=2.1%, 2=8.4%, 4=25.0%, 8=54.1%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:29.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.427 filename0: (groupid=0, jobs=1): err= 0: pid=1034381: Thu Jul 25 14:27:57 2024 00:26:29.427 read: IOPS=65, BW=262KiB/s (269kB/s)(2624KiB/10006msec) 00:26:29.427 slat (usec): min=8, max=111, avg=46.09, stdev=22.96 00:26:29.427 clat (msec): min=150, max=374, avg=243.63, stdev=37.34 00:26:29.427 lat (msec): min=150, max=374, avg=243.68, stdev=37.34 00:26:29.427 clat percentiles (msec): 00:26:29.427 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 190], 20.00th=[ 203], 00:26:29.427 | 30.00th=[ 226], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:26:29.427 | 70.00th=[ 257], 80.00th=[ 271], 90.00th=[ 305], 95.00th=[ 313], 00:26:29.427 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 376], 99.95th=[ 376], 00:26:29.427 | 99.99th=[ 376] 00:26:29.427 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=256.00, stdev=41.53, samples=20 00:26:29.427 iops : min= 32, max= 96, avg=64.00, stdev=10.38, samples=20 00:26:29.427 lat (msec) : 250=51.07%, 500=48.93% 00:26:29.427 cpu : usr=97.83%, sys=1.55%, ctx=60, majf=0, minf=34 00:26:29.427 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:29.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.427 filename1: (groupid=0, jobs=1): err= 0: pid=1034382: Thu Jul 25 14:27:57 2024 00:26:29.427 read: IOPS=66, BW=265KiB/s (271kB/s)(2648KiB/10010msec) 00:26:29.427 slat (nsec): min=8335, max=86408, avg=36420.81, stdev=17786.88 00:26:29.427 clat (msec): min=59, max=400, avg=241.49, stdev=43.86 00:26:29.427 lat (msec): min=59, max=401, avg=241.53, stdev=43.86 00:26:29.427 clat percentiles (msec): 00:26:29.427 | 1.00th=[ 167], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 201], 00:26:29.427 | 30.00th=[ 211], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:26:29.427 | 70.00th=[ 257], 80.00th=[ 271], 90.00th=[ 305], 95.00th=[ 313], 00:26:29.427 | 99.00th=[ 334], 99.50th=[ 372], 99.90th=[ 401], 99.95th=[ 401], 00:26:29.427 | 99.99th=[ 401] 00:26:29.427 bw ( KiB/s): min= 240, max= 384, per=4.05%, avg=262.40, stdev=29.09, samples=20 00:26:29.427 iops : min= 60, max= 96, avg=65.60, stdev= 7.27, samples=20 00:26:29.427 lat (msec) : 100=0.91%, 250=50.45%, 500=48.64% 00:26:29.427 cpu : usr=98.14%, sys=1.45%, ctx=19, majf=0, minf=30 00:26:29.427 IO depths : 1=4.5%, 2=10.6%, 4=24.3%, 8=52.6%, 16=8.0%, 32=0.0%, >=64=0.0% 00:26:29.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 complete : 0=0.0%, 4=93.9%, 8=0.4%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.427 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.427 filename1: (groupid=0, jobs=1): err= 0: pid=1034383: Thu Jul 25 14:27:57 2024 00:26:29.427 read: IOPS=74, BW=300KiB/s (307kB/s)(3000KiB/10010msec) 00:26:29.427 slat (usec): min=8, max=103, avg=30.55, 
stdev=21.86 00:26:29.427 clat (msec): min=78, max=367, avg=213.21, stdev=42.67 00:26:29.427 lat (msec): min=78, max=367, avg=213.24, stdev=42.68 00:26:29.427 clat percentiles (msec): 00:26:29.427 | 1.00th=[ 110], 5.00th=[ 146], 10.00th=[ 169], 20.00th=[ 184], 00:26:29.427 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 209], 60.00th=[ 224], 00:26:29.427 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 257], 95.00th=[ 279], 00:26:29.427 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 368], 99.95th=[ 368], 00:26:29.427 | 99.99th=[ 368] 00:26:29.427 bw ( KiB/s): min= 256, max= 384, per=4.59%, avg=297.60, stdev=50.70, samples=20 00:26:29.427 iops : min= 64, max= 96, avg=74.40, stdev=12.68, samples=20 00:26:29.427 lat (msec) : 100=0.80%, 250=77.60%, 500=21.60% 00:26:29.427 cpu : usr=97.93%, sys=1.51%, ctx=30, majf=0, minf=28 00:26:29.427 IO depths : 1=1.6%, 2=4.8%, 4=15.6%, 8=66.9%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:29.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.427 complete : 0=0.0%, 4=91.4%, 8=3.2%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 issued rwts: total=750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.428 filename1: (groupid=0, jobs=1): err= 0: pid=1034384: Thu Jul 25 14:27:57 2024 00:26:29.428 read: IOPS=63, BW=256KiB/s (262kB/s)(2560KiB/10009msec) 00:26:29.428 slat (nsec): min=6215, max=71480, avg=26037.64, stdev=12435.11 00:26:29.428 clat (msec): min=12, max=453, avg=249.92, stdev=63.17 00:26:29.428 lat (msec): min=12, max=453, avg=249.95, stdev=63.17 00:26:29.428 clat percentiles (msec): 00:26:29.428 | 1.00th=[ 13], 5.00th=[ 165], 10.00th=[ 184], 20.00th=[ 197], 00:26:29.428 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 259], 00:26:29.428 | 70.00th=[ 279], 80.00th=[ 305], 90.00th=[ 313], 95.00th=[ 321], 00:26:29.428 | 99.00th=[ 401], 99.50th=[ 435], 99.90th=[ 456], 99.95th=[ 456], 00:26:29.428 | 99.99th=[ 456] 00:26:29.428 bw ( KiB/s): min= 128, max= 384, per=3.78%, avg=245.05, stdev=52.54, samples=19 00:26:29.428 iops : min= 32, max= 96, avg=61.26, stdev=13.14, samples=19 00:26:29.428 lat (msec) : 20=1.56%, 50=0.94%, 250=39.38%, 500=58.13% 00:26:29.428 cpu : usr=98.12%, sys=1.38%, ctx=57, majf=0, minf=38 00:26:29.428 IO depths : 1=3.1%, 2=8.9%, 4=23.6%, 8=55.0%, 16=9.4%, 32=0.0%, >=64=0.0% 00:26:29.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.428 filename1: (groupid=0, jobs=1): err= 0: pid=1034385: Thu Jul 25 14:27:57 2024 00:26:29.428 read: IOPS=70, BW=281KiB/s (288kB/s)(2816KiB/10020msec) 00:26:29.428 slat (usec): min=6, max=111, avg=46.12, stdev=25.02 00:26:29.428 clat (msec): min=33, max=437, avg=227.15, stdev=68.55 00:26:29.428 lat (msec): min=33, max=437, avg=227.20, stdev=68.56 00:26:29.428 clat percentiles (msec): 00:26:29.428 | 1.00th=[ 34], 5.00th=[ 80], 10.00th=[ 131], 20.00th=[ 182], 00:26:29.428 | 30.00th=[ 205], 40.00th=[ 230], 50.00th=[ 247], 60.00th=[ 251], 00:26:29.428 | 70.00th=[ 257], 80.00th=[ 271], 90.00th=[ 288], 95.00th=[ 321], 00:26:29.428 | 99.00th=[ 380], 99.50th=[ 405], 99.90th=[ 439], 99.95th=[ 439], 00:26:29.428 | 99.99th=[ 439] 00:26:29.428 bw ( KiB/s): min= 128, max= 625, per=4.25%, avg=275.25, stdev=99.31, samples=20 00:26:29.428 iops : min= 32, max= 156, 
avg=68.80, stdev=24.78, samples=20 00:26:29.428 lat (msec) : 50=4.55%, 100=2.27%, 250=52.13%, 500=41.05% 00:26:29.428 cpu : usr=98.15%, sys=1.44%, ctx=14, majf=0, minf=37 00:26:29.428 IO depths : 1=2.6%, 2=8.8%, 4=25.0%, 8=53.7%, 16=9.9%, 32=0.0%, >=64=0.0% 00:26:29.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.428 filename1: (groupid=0, jobs=1): err= 0: pid=1034386: Thu Jul 25 14:27:57 2024 00:26:29.428 read: IOPS=70, BW=280KiB/s (287kB/s)(2808KiB/10015msec) 00:26:29.428 slat (nsec): min=8157, max=75718, avg=23111.38, stdev=12749.02 00:26:29.428 clat (msec): min=19, max=403, avg=227.94, stdev=45.06 00:26:29.428 lat (msec): min=19, max=403, avg=227.97, stdev=45.06 00:26:29.428 clat percentiles (msec): 00:26:29.428 | 1.00th=[ 120], 5.00th=[ 169], 10.00th=[ 182], 20.00th=[ 190], 00:26:29.428 | 30.00th=[ 203], 40.00th=[ 215], 50.00th=[ 241], 60.00th=[ 247], 00:26:29.428 | 70.00th=[ 253], 80.00th=[ 259], 90.00th=[ 271], 95.00th=[ 292], 00:26:29.428 | 99.00th=[ 342], 99.50th=[ 363], 99.90th=[ 405], 99.95th=[ 405], 00:26:29.428 | 99.99th=[ 405] 00:26:29.428 bw ( KiB/s): min= 224, max= 400, per=4.23%, avg=274.40, stdev=45.04, samples=20 00:26:29.428 iops : min= 56, max= 100, avg=68.60, stdev=11.26, samples=20 00:26:29.428 lat (msec) : 20=0.28%, 50=0.57%, 250=63.39%, 500=35.75% 00:26:29.428 cpu : usr=98.24%, sys=1.28%, ctx=34, majf=0, minf=30 00:26:29.428 IO depths : 1=2.3%, 2=6.4%, 4=18.4%, 8=62.5%, 16=10.4%, 32=0.0%, >=64=0.0% 00:26:29.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.428 filename1: (groupid=0, jobs=1): err= 0: pid=1034387: Thu Jul 25 14:27:57 2024 00:26:29.428 read: IOPS=62, BW=249KiB/s (255kB/s)(2496KiB/10006msec) 00:26:29.428 slat (usec): min=7, max=108, avg=24.33, stdev= 9.92 00:26:29.428 clat (msec): min=13, max=375, avg=256.33, stdev=61.36 00:26:29.428 lat (msec): min=13, max=375, avg=256.35, stdev=61.35 00:26:29.428 clat percentiles (msec): 00:26:29.428 | 1.00th=[ 14], 5.00th=[ 153], 10.00th=[ 184], 20.00th=[ 228], 00:26:29.428 | 30.00th=[ 253], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 271], 00:26:29.428 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 334], 95.00th=[ 338], 00:26:29.428 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 376], 99.95th=[ 376], 00:26:29.428 | 99.99th=[ 376] 00:26:29.428 bw ( KiB/s): min= 128, max= 368, per=3.75%, avg=243.20, stdev=55.81, samples=20 00:26:29.428 iops : min= 32, max= 92, avg=60.80, stdev=13.95, samples=20 00:26:29.428 lat (msec) : 20=2.56%, 250=26.60%, 500=70.83% 00:26:29.428 cpu : usr=97.85%, sys=1.56%, ctx=22, majf=0, minf=25 00:26:29.428 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:26:29.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.428 filename1: (groupid=0, jobs=1): err= 0: pid=1034388: Thu Jul 25 14:27:57 2024 
00:26:29.428 read: IOPS=63, BW=255KiB/s (261kB/s)(2560KiB/10045msec) 00:26:29.428 slat (nsec): min=6224, max=88442, avg=24956.46, stdev=11271.02 00:26:29.428 clat (msec): min=15, max=423, avg=249.98, stdev=65.70 00:26:29.428 lat (msec): min=15, max=423, avg=250.00, stdev=65.70 00:26:29.428 clat percentiles (msec): 00:26:29.428 | 1.00th=[ 16], 5.00th=[ 131], 10.00th=[ 174], 20.00th=[ 205], 00:26:29.428 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 264], 00:26:29.428 | 70.00th=[ 271], 80.00th=[ 296], 90.00th=[ 313], 95.00th=[ 359], 00:26:29.428 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 426], 99.95th=[ 426], 00:26:29.428 | 99.99th=[ 426] 00:26:29.428 bw ( KiB/s): min= 128, max= 496, per=3.94%, avg=255.20, stdev=80.50, samples=20 00:26:29.428 iops : min= 32, max= 124, avg=63.80, stdev=20.12, samples=20 00:26:29.428 lat (msec) : 20=2.19%, 50=0.62%, 250=39.69%, 500=57.50% 00:26:29.428 cpu : usr=97.95%, sys=1.43%, ctx=55, majf=0, minf=34 00:26:29.428 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:29.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.428 filename1: (groupid=0, jobs=1): err= 0: pid=1034389: Thu Jul 25 14:27:57 2024 00:26:29.428 read: IOPS=70, BW=281KiB/s (288kB/s)(2816KiB/10021msec) 00:26:29.428 slat (nsec): min=3962, max=66544, avg=25060.57, stdev=11306.16 00:26:29.428 clat (msec): min=103, max=440, avg=227.47, stdev=44.78 00:26:29.428 lat (msec): min=103, max=440, avg=227.50, stdev=44.78 00:26:29.428 clat percentiles (msec): 00:26:29.428 | 1.00th=[ 104], 5.00th=[ 155], 10.00th=[ 176], 20.00th=[ 186], 00:26:29.428 | 30.00th=[ 205], 40.00th=[ 213], 50.00th=[ 239], 60.00th=[ 247], 00:26:29.428 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 275], 95.00th=[ 300], 00:26:29.428 | 99.00th=[ 313], 99.50th=[ 351], 99.90th=[ 443], 99.95th=[ 443], 00:26:29.428 | 99.99th=[ 443] 00:26:29.428 bw ( KiB/s): min= 128, max= 384, per=4.25%, avg=275.20, stdev=61.11, samples=20 00:26:29.428 iops : min= 32, max= 96, avg=68.80, stdev=15.28, samples=20 00:26:29.428 lat (msec) : 250=65.20%, 500=34.80% 00:26:29.428 cpu : usr=98.40%, sys=1.23%, ctx=15, majf=0, minf=31 00:26:29.428 IO depths : 1=2.4%, 2=8.7%, 4=25.0%, 8=53.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:26:29.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.428 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.428 filename2: (groupid=0, jobs=1): err= 0: pid=1034390: Thu Jul 25 14:27:57 2024 00:26:29.428 read: IOPS=88, BW=355KiB/s (364kB/s)(3560KiB/10024msec) 00:26:29.428 slat (nsec): min=6521, max=78318, avg=15821.36, stdev=13967.08 00:26:29.428 clat (msec): min=46, max=317, avg=180.02, stdev=45.99 00:26:29.428 lat (msec): min=46, max=317, avg=180.04, stdev=45.99 00:26:29.428 clat percentiles (msec): 00:26:29.428 | 1.00th=[ 47], 5.00th=[ 92], 10.00th=[ 118], 20.00th=[ 153], 00:26:29.428 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 188], 60.00th=[ 192], 00:26:29.428 | 70.00th=[ 201], 80.00th=[ 211], 90.00th=[ 232], 95.00th=[ 241], 00:26:29.428 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 317], 99.95th=[ 317], 00:26:29.428 | 99.99th=[ 317] 
00:26:29.428 bw ( KiB/s): min= 256, max= 513, per=5.39%, avg=349.65, stdev=58.25, samples=20 00:26:29.428 iops : min= 64, max= 128, avg=87.40, stdev=14.53, samples=20 00:26:29.428 lat (msec) : 50=1.80%, 100=5.62%, 250=88.76%, 500=3.82% 00:26:29.428 cpu : usr=98.34%, sys=1.29%, ctx=19, majf=0, minf=29 00:26:29.428 IO depths : 1=1.0%, 2=3.7%, 4=14.0%, 8=69.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:26:29.428 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.428 issued rwts: total=890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.429 filename2: (groupid=0, jobs=1): err= 0: pid=1034391: Thu Jul 25 14:27:57 2024 00:26:29.429 read: IOPS=65, BW=262KiB/s (269kB/s)(2624KiB/10006msec) 00:26:29.429 slat (usec): min=8, max=106, avg=44.58, stdev=23.64 00:26:29.429 clat (msec): min=119, max=397, avg=243.66, stdev=45.33 00:26:29.429 lat (msec): min=119, max=397, avg=243.71, stdev=45.33 00:26:29.429 clat percentiles (msec): 00:26:29.429 | 1.00th=[ 130], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 201], 00:26:29.429 | 30.00th=[ 220], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 255], 00:26:29.429 | 70.00th=[ 262], 80.00th=[ 271], 90.00th=[ 305], 95.00th=[ 313], 00:26:29.429 | 99.00th=[ 380], 99.50th=[ 393], 99.90th=[ 397], 99.95th=[ 397], 00:26:29.429 | 99.99th=[ 397] 00:26:29.429 bw ( KiB/s): min= 128, max= 384, per=3.94%, avg=256.00, stdev=41.53, samples=20 00:26:29.429 iops : min= 32, max= 96, avg=64.00, stdev=10.38, samples=20 00:26:29.429 lat (msec) : 250=52.90%, 500=47.10% 00:26:29.429 cpu : usr=97.85%, sys=1.57%, ctx=52, majf=0, minf=40 00:26:29.429 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:26:29.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.429 filename2: (groupid=0, jobs=1): err= 0: pid=1034392: Thu Jul 25 14:27:57 2024 00:26:29.429 read: IOPS=63, BW=256KiB/s (262kB/s)(2560KiB/10010msec) 00:26:29.429 slat (nsec): min=8296, max=87819, avg=36068.79, stdev=19103.05 00:26:29.429 clat (msec): min=15, max=384, avg=249.96, stdev=56.06 00:26:29.429 lat (msec): min=15, max=384, avg=250.00, stdev=56.06 00:26:29.429 clat percentiles (msec): 00:26:29.429 | 1.00th=[ 16], 5.00th=[ 174], 10.00th=[ 184], 20.00th=[ 220], 00:26:29.429 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 262], 00:26:29.429 | 70.00th=[ 275], 80.00th=[ 288], 90.00th=[ 313], 95.00th=[ 313], 00:26:29.429 | 99.00th=[ 347], 99.50th=[ 384], 99.90th=[ 384], 99.95th=[ 384], 00:26:29.429 | 99.99th=[ 384] 00:26:29.429 bw ( KiB/s): min= 144, max= 368, per=3.85%, avg=249.26, stdev=46.29, samples=19 00:26:29.429 iops : min= 36, max= 92, avg=62.32, stdev=11.57, samples=19 00:26:29.429 lat (msec) : 20=2.50%, 250=39.06%, 500=58.44% 00:26:29.429 cpu : usr=98.11%, sys=1.43%, ctx=20, majf=0, minf=20 00:26:29.429 IO depths : 1=1.7%, 2=8.0%, 4=25.0%, 8=54.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:26:29.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.429 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:26:29.429 filename2: (groupid=0, jobs=1): err= 0: pid=1034393: Thu Jul 25 14:27:57 2024 00:26:29.429 read: IOPS=68, BW=275KiB/s (282kB/s)(2752KiB/10009msec) 00:26:29.429 slat (usec): min=8, max=101, avg=33.72, stdev=18.30 00:26:29.429 clat (msec): min=135, max=335, avg=232.40, stdev=42.45 00:26:29.429 lat (msec): min=135, max=335, avg=232.43, stdev=42.45 00:26:29.429 clat percentiles (msec): 00:26:29.429 | 1.00th=[ 136], 5.00th=[ 150], 10.00th=[ 184], 20.00th=[ 192], 00:26:29.429 | 30.00th=[ 207], 40.00th=[ 228], 50.00th=[ 247], 60.00th=[ 251], 00:26:29.429 | 70.00th=[ 253], 80.00th=[ 259], 90.00th=[ 279], 95.00th=[ 300], 00:26:29.429 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:26:29.429 | 99.99th=[ 338] 00:26:29.429 bw ( KiB/s): min= 128, max= 384, per=4.14%, avg=268.80, stdev=53.85, samples=20 00:26:29.429 iops : min= 32, max= 96, avg=67.20, stdev=13.46, samples=20 00:26:29.429 lat (msec) : 250=59.74%, 500=40.26% 00:26:29.429 cpu : usr=97.94%, sys=1.49%, ctx=47, majf=0, minf=55 00:26:29.429 IO depths : 1=4.9%, 2=11.0%, 4=24.6%, 8=51.9%, 16=7.6%, 32=0.0%, >=64=0.0% 00:26:29.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.429 filename2: (groupid=0, jobs=1): err= 0: pid=1034394: Thu Jul 25 14:27:57 2024 00:26:29.429 read: IOPS=62, BW=249KiB/s (255kB/s)(2496KiB/10005msec) 00:26:29.429 slat (usec): min=8, max=102, avg=62.41, stdev=19.01 00:26:29.429 clat (msec): min=13, max=489, avg=256.01, stdev=67.61 00:26:29.429 lat (msec): min=13, max=489, avg=256.07, stdev=67.61 00:26:29.429 clat percentiles (msec): 00:26:29.429 | 1.00th=[ 14], 5.00th=[ 153], 10.00th=[ 184], 20.00th=[ 211], 00:26:29.429 | 30.00th=[ 249], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 271], 00:26:29.429 | 70.00th=[ 279], 80.00th=[ 305], 90.00th=[ 338], 95.00th=[ 342], 00:26:29.429 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 489], 99.95th=[ 489], 00:26:29.429 | 99.99th=[ 489] 00:26:29.429 bw ( KiB/s): min= 128, max= 368, per=3.75%, avg=243.20, stdev=53.60, samples=20 00:26:29.429 iops : min= 32, max= 92, avg=60.80, stdev=13.40, samples=20 00:26:29.429 lat (msec) : 20=2.88%, 100=0.32%, 250=27.56%, 500=69.23% 00:26:29.429 cpu : usr=97.58%, sys=1.72%, ctx=61, majf=0, minf=34 00:26:29.429 IO depths : 1=3.4%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.1%, 32=0.0%, >=64=0.0% 00:26:29.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.429 filename2: (groupid=0, jobs=1): err= 0: pid=1034395: Thu Jul 25 14:27:57 2024 00:26:29.429 read: IOPS=67, BW=269KiB/s (275kB/s)(2688KiB/10006msec) 00:26:29.429 slat (nsec): min=10997, max=86724, avg=27595.37, stdev=10244.50 00:26:29.429 clat (msec): min=102, max=338, avg=237.98, stdev=45.70 00:26:29.429 lat (msec): min=102, max=338, avg=238.01, stdev=45.70 00:26:29.429 clat percentiles (msec): 00:26:29.429 | 1.00th=[ 103], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 190], 00:26:29.429 | 30.00th=[ 209], 40.00th=[ 230], 50.00th=[ 247], 60.00th=[ 251], 00:26:29.429 | 70.00th=[ 255], 80.00th=[ 275], 90.00th=[ 313], 95.00th=[ 313], 
00:26:29.429 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 338], 99.95th=[ 338], 00:26:29.429 | 99.99th=[ 338] 00:26:29.429 bw ( KiB/s): min= 128, max= 384, per=4.05%, avg=262.40, stdev=62.16, samples=20 00:26:29.429 iops : min= 32, max= 96, avg=65.60, stdev=15.54, samples=20 00:26:29.429 lat (msec) : 250=56.55%, 500=43.45% 00:26:29.429 cpu : usr=97.37%, sys=1.91%, ctx=39, majf=0, minf=26 00:26:29.429 IO depths : 1=4.9%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.6%, 32=0.0%, >=64=0.0% 00:26:29.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 issued rwts: total=672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.429 filename2: (groupid=0, jobs=1): err= 0: pid=1034396: Thu Jul 25 14:27:57 2024 00:26:29.429 read: IOPS=62, BW=249KiB/s (255kB/s)(2496KiB/10005msec) 00:26:29.429 slat (nsec): min=8518, max=74612, avg=19986.65, stdev=15189.02 00:26:29.429 clat (msec): min=13, max=360, avg=256.35, stdev=62.52 00:26:29.429 lat (msec): min=13, max=360, avg=256.37, stdev=62.51 00:26:29.429 clat percentiles (msec): 00:26:29.429 | 1.00th=[ 14], 5.00th=[ 153], 10.00th=[ 184], 20.00th=[ 230], 00:26:29.429 | 30.00th=[ 253], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 271], 00:26:29.429 | 70.00th=[ 279], 80.00th=[ 300], 90.00th=[ 334], 95.00th=[ 338], 00:26:29.429 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 359], 99.95th=[ 359], 00:26:29.429 | 99.99th=[ 359] 00:26:29.429 bw ( KiB/s): min= 128, max= 384, per=3.75%, avg=243.20, stdev=55.81, samples=20 00:26:29.429 iops : min= 32, max= 96, avg=60.80, stdev=13.95, samples=20 00:26:29.429 lat (msec) : 20=2.56%, 100=0.32%, 250=25.64%, 500=71.47% 00:26:29.429 cpu : usr=98.02%, sys=1.44%, ctx=45, majf=0, minf=31 00:26:29.429 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:29.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.429 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.429 filename2: (groupid=0, jobs=1): err= 0: pid=1034397: Thu Jul 25 14:27:57 2024 00:26:29.429 read: IOPS=63, BW=256KiB/s (262kB/s)(2560KiB/10006msec) 00:26:29.429 slat (usec): min=8, max=109, avg=57.01, stdev=23.84 00:26:29.429 clat (msec): min=9, max=400, avg=249.68, stdev=75.54 00:26:29.429 lat (msec): min=9, max=400, avg=249.74, stdev=75.55 00:26:29.429 clat percentiles (msec): 00:26:29.429 | 1.00th=[ 10], 5.00th=[ 21], 10.00th=[ 174], 20.00th=[ 205], 00:26:29.429 | 30.00th=[ 234], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 266], 00:26:29.429 | 70.00th=[ 279], 80.00th=[ 305], 90.00th=[ 338], 95.00th=[ 338], 00:26:29.429 | 99.00th=[ 401], 99.50th=[ 401], 99.90th=[ 401], 99.95th=[ 401], 00:26:29.429 | 99.99th=[ 401] 00:26:29.429 bw ( KiB/s): min= 128, max= 384, per=3.85%, avg=249.60, stdev=64.29, samples=20 00:26:29.429 iops : min= 32, max= 96, avg=62.40, stdev=16.07, samples=20 00:26:29.429 lat (msec) : 10=2.81%, 20=2.19%, 100=0.62%, 250=29.06%, 500=65.31% 00:26:29.429 cpu : usr=97.83%, sys=1.55%, ctx=47, majf=0, minf=32 00:26:29.429 IO depths : 1=3.3%, 2=9.5%, 4=25.0%, 8=53.0%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:29.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.429 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:29.429 issued rwts: total=640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.430 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.430 00:26:29.430 Run status group 0 (all jobs): 00:26:29.430 READ: bw=6475KiB/s (6630kB/s), 249KiB/s-355KiB/s (255kB/s-364kB/s), io=63.5MiB (66.6MB), run=10005-10045msec 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null2 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 bdev_null0 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 [2024-07-25 14:27:57.663730] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:29.430 14:27:57 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 bdev_null1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:29.430 { 00:26:29.430 "params": { 00:26:29.430 "name": "Nvme$subsystem", 00:26:29.430 "trtype": "$TEST_TRANSPORT", 00:26:29.430 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.430 "adrfam": "ipv4", 00:26:29.430 "trsvcid": "$NVMF_PORT", 00:26:29.430 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.430 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.430 "hdgst": ${hdgst:-false}, 00:26:29.430 "ddgst": ${ddgst:-false} 00:26:29.430 }, 00:26:29.430 "method": "bdev_nvme_attach_controller" 00:26:29.430 } 00:26:29.430 EOF 00:26:29.430 )") 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:29.430 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:29.431 { 00:26:29.431 "params": { 00:26:29.431 "name": "Nvme$subsystem", 00:26:29.431 "trtype": "$TEST_TRANSPORT", 00:26:29.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.431 "adrfam": "ipv4", 00:26:29.431 "trsvcid": "$NVMF_PORT", 00:26:29.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.431 "hdgst": ${hdgst:-false}, 00:26:29.431 "ddgst": ${ddgst:-false} 00:26:29.431 }, 00:26:29.431 "method": "bdev_nvme_attach_controller" 00:26:29.431 } 00:26:29.431 EOF 00:26:29.431 )") 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
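The interleaved nvmf/common.sh trace here is the gen_nvmf_target_json helper building one bdev_nvme_attach_controller entry per subsystem. Pulled out of the xtrace noise, the pattern is roughly the sketch below; this is a simplified, illustrative version only (the real helper also pretty-prints the fragments with jq before handing the result to fio on /dev/fd/62):

# Simplified sketch of the per-subsystem config assembly traced above.
config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas, as seen in the printf '%s\n' output below.
(IFS=,; printf '%s\n' "${config[*]}")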
00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:29.431 "params": { 00:26:29.431 "name": "Nvme0", 00:26:29.431 "trtype": "tcp", 00:26:29.431 "traddr": "10.0.0.2", 00:26:29.431 "adrfam": "ipv4", 00:26:29.431 "trsvcid": "4420", 00:26:29.431 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.431 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:29.431 "hdgst": false, 00:26:29.431 "ddgst": false 00:26:29.431 }, 00:26:29.431 "method": "bdev_nvme_attach_controller" 00:26:29.431 },{ 00:26:29.431 "params": { 00:26:29.431 "name": "Nvme1", 00:26:29.431 "trtype": "tcp", 00:26:29.431 "traddr": "10.0.0.2", 00:26:29.431 "adrfam": "ipv4", 00:26:29.431 "trsvcid": "4420", 00:26:29.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.431 "hdgst": false, 00:26:29.431 "ddgst": false 00:26:29.431 }, 00:26:29.431 "method": "bdev_nvme_attach_controller" 00:26:29.431 }' 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:29.431 14:27:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.431 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:29.431 ... 00:26:29.431 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:29.431 ... 
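The filename0/filename1 banner above comes from the fio job file that dif.sh generates on /dev/fd/61 with the parameters set at target/dif.sh@115 (bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1). That generated file is not shown in the log; a hand-written job file consistent with the banner might look like the hypothetical sketch below (the file path, the Nvme0n1/Nvme1n1 bdev names, thread=1 and time_based are assumptions, not taken from this log):

# Hypothetical fio job file matching the banner above (2 jobs x 2 files -> 4 threads).
cat > /tmp/dif.fio <<'EOF'
[global]
# SPDK fio plugins are generally run with fio threads rather than forked processes
thread=1
ioengine=spdk_bdev
rw=randread
# read,write,trim block sizes -> (R) 8192B, (W) 16.0KiB, (T) 128KiB in the banner
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

# Invocation mirroring the traced command, with a file instead of /dev/fd descriptors.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme.json /tmp/dif.fio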
00:26:29.431 fio-3.35 00:26:29.431 Starting 4 threads 00:26:29.431 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.741 00:26:34.741 filename0: (groupid=0, jobs=1): err= 0: pid=1035901: Thu Jul 25 14:28:03 2024 00:26:34.741 read: IOPS=1887, BW=14.7MiB/s (15.5MB/s)(73.8MiB/5002msec) 00:26:34.741 slat (nsec): min=4198, max=67060, avg=20642.28, stdev=8934.08 00:26:34.741 clat (usec): min=846, max=8088, avg=4166.28, stdev=618.79 00:26:34.741 lat (usec): min=867, max=8114, avg=4186.92, stdev=618.69 00:26:34.741 clat percentiles (usec): 00:26:34.741 | 1.00th=[ 2245], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 3916], 00:26:34.741 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4178], 00:26:34.741 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5276], 00:26:34.741 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7242], 99.95th=[ 7308], 00:26:34.741 | 99.99th=[ 8094] 00:26:34.741 bw ( KiB/s): min=14672, max=15536, per=24.78%, avg=15096.00, stdev=224.67, samples=10 00:26:34.741 iops : min= 1834, max= 1942, avg=1887.00, stdev=28.08, samples=10 00:26:34.741 lat (usec) : 1000=0.05% 00:26:34.741 lat (msec) : 2=0.62%, 4=27.87%, 10=71.45% 00:26:34.741 cpu : usr=95.16%, sys=3.92%, ctx=34, majf=0, minf=9 00:26:34.741 IO depths : 1=0.1%, 2=15.2%, 4=57.2%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.741 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.741 issued rwts: total=9440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.741 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:34.741 filename0: (groupid=0, jobs=1): err= 0: pid=1035902: Thu Jul 25 14:28:03 2024 00:26:34.742 read: IOPS=1934, BW=15.1MiB/s (15.8MB/s)(75.6MiB/5003msec) 00:26:34.742 slat (nsec): min=6282, max=74337, avg=14340.78, stdev=8503.88 00:26:34.742 clat (usec): min=981, max=7426, avg=4086.73, stdev=544.71 00:26:34.742 lat (usec): min=995, max=7448, avg=4101.07, stdev=544.82 00:26:34.742 clat percentiles (usec): 00:26:34.742 | 1.00th=[ 2442], 5.00th=[ 3228], 10.00th=[ 3490], 20.00th=[ 3818], 00:26:34.742 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4113], 60.00th=[ 4178], 00:26:34.742 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4883], 00:26:34.742 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 6849], 99.95th=[ 7046], 00:26:34.742 | 99.99th=[ 7439] 00:26:34.742 bw ( KiB/s): min=15120, max=16080, per=25.41%, avg=15480.00, stdev=347.47, samples=10 00:26:34.742 iops : min= 1890, max= 2010, avg=1935.00, stdev=43.43, samples=10 00:26:34.742 lat (usec) : 1000=0.01% 00:26:34.742 lat (msec) : 2=0.44%, 4=31.52%, 10=68.03% 00:26:34.742 cpu : usr=95.16%, sys=4.34%, ctx=14, majf=0, minf=0 00:26:34.742 IO depths : 1=0.4%, 2=10.5%, 4=61.5%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.742 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.742 issued rwts: total=9680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.742 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:34.742 filename1: (groupid=0, jobs=1): err= 0: pid=1035903: Thu Jul 25 14:28:03 2024 00:26:34.742 read: IOPS=1897, BW=14.8MiB/s (15.5MB/s)(74.2MiB/5003msec) 00:26:34.742 slat (nsec): min=3843, max=70620, avg=16025.47, stdev=9566.05 00:26:34.742 clat (usec): min=1064, max=7542, avg=4162.55, stdev=555.06 00:26:34.742 lat (usec): min=1077, max=7593, avg=4178.58, stdev=554.81 
00:26:34.742 clat percentiles (usec): 00:26:34.742 | 1.00th=[ 2737], 5.00th=[ 3359], 10.00th=[ 3621], 20.00th=[ 3884], 00:26:34.742 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4178], 00:26:34.742 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4686], 95.00th=[ 5080], 00:26:34.742 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 7308], 99.95th=[ 7439], 00:26:34.742 | 99.99th=[ 7570] 00:26:34.742 bw ( KiB/s): min=14813, max=15424, per=24.91%, avg=15177.30, stdev=178.40, samples=10 00:26:34.742 iops : min= 1851, max= 1928, avg=1897.10, stdev=22.44, samples=10 00:26:34.742 lat (msec) : 2=0.15%, 4=27.80%, 10=72.05% 00:26:34.742 cpu : usr=95.08%, sys=4.42%, ctx=9, majf=0, minf=0 00:26:34.742 IO depths : 1=0.2%, 2=13.2%, 4=59.1%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.742 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.742 issued rwts: total=9492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.742 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:34.742 filename1: (groupid=0, jobs=1): err= 0: pid=1035904: Thu Jul 25 14:28:03 2024 00:26:34.742 read: IOPS=1896, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5001msec) 00:26:34.742 slat (nsec): min=6832, max=70563, avg=18683.68, stdev=10188.10 00:26:34.742 clat (usec): min=895, max=7711, avg=4151.68, stdev=613.19 00:26:34.742 lat (usec): min=914, max=7731, avg=4170.37, stdev=612.88 00:26:34.742 clat percentiles (usec): 00:26:34.742 | 1.00th=[ 2442], 5.00th=[ 3326], 10.00th=[ 3589], 20.00th=[ 3916], 00:26:34.742 | 30.00th=[ 4015], 40.00th=[ 4047], 50.00th=[ 4113], 60.00th=[ 4178], 00:26:34.742 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4752], 95.00th=[ 5276], 00:26:34.742 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7177], 99.95th=[ 7373], 00:26:34.742 | 99.99th=[ 7701] 00:26:34.742 bw ( KiB/s): min=14784, max=15488, per=24.90%, avg=15169.40, stdev=207.89, samples=10 00:26:34.742 iops : min= 1848, max= 1936, avg=1896.10, stdev=25.90, samples=10 00:26:34.742 lat (usec) : 1000=0.03% 00:26:34.742 lat (msec) : 2=0.54%, 4=28.72%, 10=70.71% 00:26:34.742 cpu : usr=94.60%, sys=4.90%, ctx=12, majf=0, minf=9 00:26:34.742 IO depths : 1=0.1%, 2=15.2%, 4=56.9%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:34.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.742 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.742 issued rwts: total=9484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.742 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:34.742 00:26:34.742 Run status group 0 (all jobs): 00:26:34.742 READ: bw=59.5MiB/s (62.4MB/s), 14.7MiB/s-15.1MiB/s (15.5MB/s-15.8MB/s), io=298MiB (312MB), run=5001-5003msec 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.742 14:28:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.742 00:26:34.742 real 0m24.290s 00:26:34.742 user 4m32.907s 00:26:34.742 sys 0m6.287s 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:34.742 14:28:04 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:34.742 ************************************ 00:26:34.742 END TEST fio_dif_rand_params 00:26:34.742 ************************************ 00:26:34.742 14:28:04 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:34.742 14:28:04 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:34.742 14:28:04 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:34.742 14:28:04 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:34.742 14:28:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:34.742 ************************************ 00:26:34.742 START TEST fio_dif_digest 00:26:34.742 ************************************ 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.742 bdev_null0 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.742 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:34.743 [2024-07-25 14:28:04.103822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:34.743 { 00:26:34.743 "params": { 00:26:34.743 "name": "Nvme$subsystem", 00:26:34.743 "trtype": "$TEST_TRANSPORT", 00:26:34.743 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:34.743 "adrfam": "ipv4", 00:26:34.743 "trsvcid": "$NVMF_PORT", 00:26:34.743 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:34.743 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:34.743 "hdgst": ${hdgst:-false}, 00:26:34.743 "ddgst": ${ddgst:-false} 00:26:34.743 }, 00:26:34.743 "method": "bdev_nvme_attach_controller" 00:26:34.743 } 00:26:34.743 EOF 00:26:34.743 )") 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
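The DIF-enabled target exercised by this fio_dif_digest run was assembled by the create_subsystem 0 helper traced above. A minimal standalone sketch of the same setup, assuming SPDK's scripts/rpc.py client and the default /var/tmp/spdk.sock RPC socket (parameters taken from the rpc_cmd calls in this trace):
# Null bdev with 16-byte metadata and DIF type 3 protection
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# NVMe-oF subsystem, namespace, and TCP listener on 10.0.0.2:4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420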
00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:34.743 "params": { 00:26:34.743 "name": "Nvme0", 00:26:34.743 "trtype": "tcp", 00:26:34.743 "traddr": "10.0.0.2", 00:26:34.743 "adrfam": "ipv4", 00:26:34.743 "trsvcid": "4420", 00:26:34.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:34.743 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:34.743 "hdgst": true, 00:26:34.743 "ddgst": true 00:26:34.743 }, 00:26:34.743 "method": "bdev_nvme_attach_controller" 00:26:34.743 }' 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:34.743 14:28:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:34.743 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:34.743 ... 
00:26:34.743 fio-3.35 00:26:34.743 Starting 3 threads 00:26:34.743 EAL: No free 2048 kB hugepages reported on node 1 00:26:46.938 00:26:46.938 filename0: (groupid=0, jobs=1): err= 0: pid=1036661: Thu Jul 25 14:28:14 2024 00:26:46.938 read: IOPS=210, BW=26.3MiB/s (27.6MB/s)(265MiB/10045msec) 00:26:46.938 slat (nsec): min=7173, max=89115, avg=21539.30, stdev=7487.64 00:26:46.938 clat (usec): min=11198, max=56614, avg=14188.99, stdev=1537.12 00:26:46.938 lat (usec): min=11218, max=56635, avg=14210.53, stdev=1536.86 00:26:46.938 clat percentiles (usec): 00:26:46.938 | 1.00th=[11863], 5.00th=[12518], 10.00th=[12911], 20.00th=[13435], 00:26:46.938 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14222], 60.00th=[14353], 00:26:46.938 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15270], 95.00th=[15664], 00:26:46.938 | 99.00th=[16581], 99.50th=[17433], 99.90th=[20317], 99.95th=[49546], 00:26:46.938 | 99.99th=[56361] 00:26:46.938 bw ( KiB/s): min=26112, max=27904, per=34.58%, avg=27072.00, stdev=397.25, samples=20 00:26:46.938 iops : min= 204, max= 218, avg=211.50, stdev= 3.10, samples=20 00:26:46.938 lat (msec) : 20=99.86%, 50=0.09%, 100=0.05% 00:26:46.938 cpu : usr=90.89%, sys=7.10%, ctx=404, majf=0, minf=118 00:26:46.938 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.938 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.938 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:46.938 filename0: (groupid=0, jobs=1): err= 0: pid=1036662: Thu Jul 25 14:28:14 2024 00:26:46.938 read: IOPS=201, BW=25.1MiB/s (26.3MB/s)(252MiB/10044msec) 00:26:46.938 slat (nsec): min=7577, max=52199, avg=16498.71, stdev=4913.22 00:26:46.938 clat (usec): min=10924, max=51324, avg=14883.69, stdev=1459.48 00:26:46.938 lat (usec): min=10936, max=51337, avg=14900.19, stdev=1459.42 00:26:46.938 clat percentiles (usec): 00:26:46.938 | 1.00th=[12780], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:26:46.938 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14746], 60.00th=[15008], 00:26:46.938 | 70.00th=[15270], 80.00th=[15533], 90.00th=[16057], 95.00th=[16450], 00:26:46.938 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[49546], 00:26:46.938 | 99.99th=[51119] 00:26:46.938 bw ( KiB/s): min=25088, max=26368, per=32.98%, avg=25820.10, stdev=380.00, samples=20 00:26:46.938 iops : min= 196, max= 206, avg=201.70, stdev= 2.99, samples=20 00:26:46.938 lat (msec) : 20=99.90%, 50=0.05%, 100=0.05% 00:26:46.938 cpu : usr=93.77%, sys=5.76%, ctx=23, majf=0, minf=188 00:26:46.938 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.938 issued rwts: total=2019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.938 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:46.938 filename0: (groupid=0, jobs=1): err= 0: pid=1036663: Thu Jul 25 14:28:14 2024 00:26:46.938 read: IOPS=199, BW=25.0MiB/s (26.2MB/s)(251MiB/10044msec) 00:26:46.938 slat (nsec): min=7470, max=49819, avg=16279.91, stdev=5004.34 00:26:46.938 clat (usec): min=11567, max=52493, avg=14971.88, stdev=1458.24 00:26:46.938 lat (usec): min=11580, max=52506, avg=14988.16, stdev=1458.39 00:26:46.938 clat percentiles (usec): 00:26:46.938 | 
1.00th=[12780], 5.00th=[13566], 10.00th=[13829], 20.00th=[14222], 00:26:46.938 | 30.00th=[14484], 40.00th=[14746], 50.00th=[14877], 60.00th=[15139], 00:26:46.938 | 70.00th=[15401], 80.00th=[15664], 90.00th=[16057], 95.00th=[16450], 00:26:46.938 | 99.00th=[17171], 99.50th=[17695], 99.90th=[17957], 99.95th=[50070], 00:26:46.938 | 99.99th=[52691] 00:26:46.938 bw ( KiB/s): min=24832, max=26112, per=32.78%, avg=25664.00, stdev=397.25, samples=20 00:26:46.938 iops : min= 194, max= 204, avg=200.50, stdev= 3.10, samples=20 00:26:46.938 lat (msec) : 20=99.90%, 100=0.10% 00:26:46.938 cpu : usr=93.27%, sys=6.25%, ctx=22, majf=0, minf=99 00:26:46.938 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:46.938 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.938 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.938 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.938 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:46.938 00:26:46.938 Run status group 0 (all jobs): 00:26:46.938 READ: bw=76.4MiB/s (80.2MB/s), 25.0MiB/s-26.3MiB/s (26.2MB/s-27.6MB/s), io=768MiB (805MB), run=10044-10045msec 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:46.938 00:26:46.938 real 0m11.199s 00:26:46.938 user 0m29.031s 00:26:46.938 sys 0m2.245s 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:46.938 14:28:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.938 ************************************ 00:26:46.938 END TEST fio_dif_digest 00:26:46.938 ************************************ 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:26:46.938 14:28:15 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:46.938 14:28:15 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:26:46.938 rmmod nvme_tcp 00:26:46.938 rmmod nvme_fabrics 00:26:46.938 rmmod nvme_keyring 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1030508 ']' 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1030508 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1030508 ']' 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1030508 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030508 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030508' 00:26:46.938 killing process with pid 1030508 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1030508 00:26:46.938 14:28:15 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1030508 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:46.938 14:28:15 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:47.197 Waiting for block devices as requested 00:26:47.197 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:47.455 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:47.455 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:47.455 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:47.455 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:47.714 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:47.714 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:47.714 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:47.714 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:47.972 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:47.972 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:47.972 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:47.972 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:48.230 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:48.230 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:48.230 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:48.489 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:48.489 14:28:18 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:48.489 14:28:18 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:48.489 14:28:18 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.489 14:28:18 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.489 14:28:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.489 14:28:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:48.489 14:28:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.030 14:28:20 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:51.030 00:26:51.030 real 1m6.837s 00:26:51.030 user 6m29.912s 00:26:51.030 sys 0m17.585s 00:26:51.030 14:28:20 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:26:51.030 14:28:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:51.030 ************************************ 00:26:51.030 END TEST nvmf_dif 00:26:51.030 ************************************ 00:26:51.030 14:28:20 -- common/autotest_common.sh@1142 -- # return 0 00:26:51.030 14:28:20 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:51.030 14:28:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:51.030 14:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:51.030 14:28:20 -- common/autotest_common.sh@10 -- # set +x 00:26:51.030 ************************************ 00:26:51.030 START TEST nvmf_abort_qd_sizes 00:26:51.030 ************************************ 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:51.030 * Looking for test storage... 00:26:51.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.030 14:28:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.030 14:28:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:52.935 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:52.935 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:52.935 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:52.935 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
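The two ice ports detected above expose the net devices cvl_0_0 and cvl_0_1, which nvmf_tcp_init splits next into a target namespace and a host-side initiator. A condensed sketch of that plumbing, assuming the interface names found here (the full command trace follows below):
# Move the target port into its own namespace and address both ends
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP traffic to port 4420 in from the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT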
00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.935 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.936 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:26:52.936 00:26:52.936 --- 10.0.0.2 ping statistics --- 00:26:52.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.936 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:52.936 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:52.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:26:52.936 00:26:52.936 --- 10.0.0.1 ping statistics --- 00:26:52.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.936 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:26:52.936 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.936 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:52.936 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:52.936 14:28:22 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:54.318 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:54.318 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:54.318 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:54.318 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:54.318 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:54.318 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:54.318 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:54.318 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:54.318 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:54.318 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:54.318 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:54.318 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:54.318 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:54.318 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:54.318 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:54.318 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:54.887 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1041571 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1041571 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1041571 ']' 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:55.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.145 14:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.146 14:28:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:55.146 [2024-07-25 14:28:24.788406] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:26:55.146 [2024-07-25 14:28:24.788503] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.404 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.404 [2024-07-25 14:28:24.859345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.404 [2024-07-25 14:28:24.973869] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.404 [2024-07-25 14:28:24.973923] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.404 [2024-07-25 14:28:24.973951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.404 [2024-07-25 14:28:24.973963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.404 [2024-07-25 14:28:24.973973] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.404 [2024-07-25 14:28:24.974029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.404 [2024-07-25 14:28:24.974092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.404 [2024-07-25 14:28:24.974159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.404 [2024-07-25 14:28:24.974163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.662 14:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.662 14:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:55.662 14:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:26:55.663 14:28:25 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.663 14:28:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:55.663 ************************************ 00:26:55.663 START TEST spdk_target_abort 00:26:55.663 ************************************ 00:26:55.663 14:28:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:55.663 14:28:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:55.663 14:28:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:55.663 14:28:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.663 14:28:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:59.011 spdk_targetn1 00:26:59.011 14:28:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.011 14:28:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:59.011 14:28:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:59.011 [2024-07-25 14:28:28.004141] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:59.011 [2024-07-25 14:28:28.036447] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:59.011 14:28:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:59.011 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:02.298 Initializing NVMe Controllers 00:27:02.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:02.298 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:02.298 Initialization complete. Launching workers. 00:27:02.298 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12457, failed: 0 00:27:02.298 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1301, failed to submit 11156 00:27:02.298 success 712, unsuccess 589, failed 0 00:27:02.298 14:28:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:02.298 14:28:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:02.298 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.650 Initializing NVMe Controllers 00:27:05.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:05.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:05.650 Initialization complete. Launching workers. 00:27:05.650 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8525, failed: 0 00:27:05.650 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1239, failed to submit 7286 00:27:05.650 success 318, unsuccess 921, failed 0 00:27:05.650 14:28:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:05.650 14:28:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:05.650 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.938 Initializing NVMe Controllers 00:27:08.938 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.938 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:08.938 Initialization complete. Launching workers. 
00:27:08.938 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31292, failed: 0 00:27:08.938 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2669, failed to submit 28623 00:27:08.938 success 503, unsuccess 2166, failed 0 00:27:08.938 14:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:08.938 14:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.938 14:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.938 14:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.938 14:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:08.938 14:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.938 14:28:37 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1041571 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1041571 ']' 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1041571 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1041571 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1041571' 00:27:09.875 killing process with pid 1041571 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1041571 00:27:09.875 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1041571 00:27:10.135 00:27:10.135 real 0m14.375s 00:27:10.135 user 0m54.081s 00:27:10.135 sys 0m2.767s 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:10.135 ************************************ 00:27:10.135 END TEST spdk_target_abort 00:27:10.135 ************************************ 00:27:10.135 14:28:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:10.135 14:28:39 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:10.135 14:28:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:10.135 14:28:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.135 14:28:39 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:10.135 
************************************ 00:27:10.135 START TEST kernel_target_abort 00:27:10.135 ************************************ 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:10.135 14:28:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:11.072 Waiting for block devices as requested 00:27:11.072 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:11.331 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:11.331 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:11.590 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:11.590 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:11.590 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:11.590 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:11.850 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:11.850 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:11.850 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:11.850 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:12.110 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:12.110 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:12.110 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:12.369 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:12.369 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:12.369 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:12.627 No valid GPT data, bailing 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:12.627 14:28:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:12.627 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:12.627 00:27:12.627 Discovery Log Number of Records 2, Generation counter 2 00:27:12.627 =====Discovery Log Entry 0====== 00:27:12.627 trtype: tcp 00:27:12.627 adrfam: ipv4 00:27:12.627 subtype: current discovery subsystem 00:27:12.627 treq: not specified, sq flow control disable supported 00:27:12.627 portid: 1 00:27:12.627 trsvcid: 4420 00:27:12.627 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:12.627 traddr: 10.0.0.1 00:27:12.627 eflags: none 00:27:12.627 sectype: none 00:27:12.627 =====Discovery Log Entry 1====== 00:27:12.627 trtype: tcp 00:27:12.627 adrfam: ipv4 00:27:12.627 subtype: nvme subsystem 00:27:12.627 treq: not specified, sq flow control disable supported 00:27:12.628 portid: 1 00:27:12.628 trsvcid: 4420 00:27:12.628 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:12.628 traddr: 10.0.0.1 00:27:12.628 eflags: none 00:27:12.628 sectype: none 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.628 14:28:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:12.628 14:28:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.628 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.908 Initializing NVMe Controllers 00:27:15.908 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:15.908 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:15.908 Initialization complete. Launching workers. 00:27:15.908 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55469, failed: 0 00:27:15.908 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55469, failed to submit 0 00:27:15.908 success 0, unsuccess 55469, failed 0 00:27:15.908 14:28:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:15.908 14:28:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:15.908 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.196 Initializing NVMe Controllers 00:27:19.196 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:19.196 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:19.196 Initialization complete. Launching workers. 
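The kernel_target_abort setup traced above (configure_kernel_target in nvmf/common.sh) builds the Linux nvmet target entirely through configfs. A standalone sketch of those steps, assuming /dev/nvme0n1 as the backing namespace and the 10.0.0.1:4420 TCP listener used in this run; the attribute file names are the standard nvmet ones and are inferred here, since the trace only shows the values being echoed:

  modprobe nvmet   # nvmet_tcp must also be present for the tcp port below (the cleanup later unloads both)
  # create the subsystem and let any host connect
  SUB=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$SUB"
  echo 1 > "$SUB/attr_allow_any_host"
  # back namespace 1 with the local NVMe block device and enable it
  mkdir "$SUB/namespaces/1"
  echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
  echo 1 > "$SUB/namespaces/1/enable"
  # expose a TCP/IPv4 port on 10.0.0.1:4420 and link the subsystem to it
  mkdir /sys/kernel/config/nvmet/ports/1
  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo tcp      > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  ln -s "$SUB" /sys/kernel/config/nvmet/ports/1/subsystems/

The nvme discover output earlier in the log (two discovery entries, one for the discovery subsystem and one for nqn.2016-06.io.spdk:testnqn) is how the test confirms the port is listening before running the abort workloads at queue depths 4, 24 and 64.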
00:27:19.196 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98024, failed: 0 00:27:19.196 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24738, failed to submit 73286 00:27:19.196 success 0, unsuccess 24738, failed 0 00:27:19.196 14:28:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:19.196 14:28:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:19.196 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.483 Initializing NVMe Controllers 00:27:22.483 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:22.483 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:22.483 Initialization complete. Launching workers. 00:27:22.483 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94643, failed: 0 00:27:22.483 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23638, failed to submit 71005 00:27:22.483 success 0, unsuccess 23638, failed 0 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:22.483 14:28:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:23.418 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:23.418 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:23.418 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:23.418 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:23.418 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:23.418 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:23.418 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:23.418 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:23.418 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:23.418 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:23.418 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:23.418 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:23.418 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:23.418 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:23.418 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:23.418 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:24.352 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:24.352 00:27:24.352 real 0m14.305s 00:27:24.352 user 0m6.528s 00:27:24.352 sys 0m3.125s 00:27:24.352 14:28:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:24.352 14:28:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:24.352 ************************************ 00:27:24.352 END TEST kernel_target_abort 00:27:24.352 ************************************ 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:24.352 rmmod nvme_tcp 00:27:24.352 rmmod nvme_fabrics 00:27:24.352 rmmod nvme_keyring 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1041571 ']' 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1041571 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1041571 ']' 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1041571 00:27:24.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1041571) - No such process 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1041571 is not found' 00:27:24.352 Process with pid 1041571 is not found 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:24.352 14:28:53 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:25.730 Waiting for block devices as requested 00:27:25.730 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:25.730 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:25.730 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:25.990 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:25.990 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:25.990 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:26.249 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:26.249 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:26.249 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:26.249 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:26.507 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:26.507 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:26.507 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:26.507 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:27:26.766 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:26.766 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:26.766 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:27.024 14:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:27.024 14:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:27.024 14:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:27.024 14:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:27.024 14:28:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.024 14:28:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:27.024 14:28:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.936 14:28:58 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.936 00:27:28.936 real 0m38.354s 00:27:28.936 user 1m2.761s 00:27:28.936 sys 0m9.458s 00:27:28.936 14:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:28.936 14:28:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:28.936 ************************************ 00:27:28.936 END TEST nvmf_abort_qd_sizes 00:27:28.936 ************************************ 00:27:28.936 14:28:58 -- common/autotest_common.sh@1142 -- # return 0 00:27:28.936 14:28:58 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:28.936 14:28:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:28.936 14:28:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:28.936 14:28:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.936 ************************************ 00:27:28.936 START TEST keyring_file 00:27:28.936 ************************************ 00:27:28.936 14:28:58 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:28.936 * Looking for test storage... 
00:27:28.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:28.936 14:28:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.936 14:28:58 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.936 14:28:58 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.936 14:28:58 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.936 14:28:58 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.936 14:28:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.936 14:28:58 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.936 14:28:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:28.936 14:28:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:28.936 14:28:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:28.936 14:28:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:28.936 14:28:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:28.936 14:28:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:28.936 14:28:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:28.936 14:28:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.9dIqJiK792 00:27:28.936 14:28:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:28.936 14:28:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:29.196 14:28:58 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.9dIqJiK792 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.9dIqJiK792 00:27:29.196 14:28:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.9dIqJiK792 00:27:29.196 14:28:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LJ6H4yqlly 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:29.196 14:28:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:29.196 14:28:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:29.196 14:28:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:29.196 14:28:58 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:29.196 14:28:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:29.196 14:28:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LJ6H4yqlly 00:27:29.196 14:28:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LJ6H4yqlly 00:27:29.196 14:28:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.LJ6H4yqlly 00:27:29.196 14:28:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=1047341 00:27:29.196 14:28:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:29.196 14:28:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1047341 00:27:29.196 14:28:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1047341 ']' 00:27:29.196 14:28:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.196 14:28:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.196 14:28:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.196 14:28:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.196 14:28:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.196 [2024-07-25 14:28:58.713699] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:27:29.196 [2024-07-25 14:28:58.713780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047341 ] 00:27:29.196 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.196 [2024-07-25 14:28:58.774152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.456 [2024-07-25 14:28:58.884015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.715 14:28:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.715 14:28:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:29.715 14:28:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:29.715 14:28:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.715 14:28:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.715 [2024-07-25 14:28:59.134095] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.715 null0 00:27:29.715 [2024-07-25 14:28:59.166151] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:29.715 [2024-07-25 14:28:59.166655] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:29.716 [2024-07-25 14:28:59.174143] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.716 14:28:59 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.716 [2024-07-25 14:28:59.182151] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:29.716 request: 00:27:29.716 { 00:27:29.716 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:29.716 "secure_channel": false, 00:27:29.716 "listen_address": { 00:27:29.716 "trtype": "tcp", 00:27:29.716 "traddr": "127.0.0.1", 00:27:29.716 "trsvcid": "4420" 00:27:29.716 }, 00:27:29.716 "method": "nvmf_subsystem_add_listener", 00:27:29.716 "req_id": 1 00:27:29.716 } 00:27:29.716 Got JSON-RPC error response 00:27:29.716 response: 00:27:29.716 { 00:27:29.716 "code": -32602, 00:27:29.716 "message": "Invalid parameters" 00:27:29.716 } 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 
00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:29.716 14:28:59 keyring_file -- keyring/file.sh@46 -- # bperfpid=1047352 00:27:29.716 14:28:59 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:29.716 14:28:59 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1047352 /var/tmp/bperf.sock 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1047352 ']' 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:29.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.716 14:28:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.716 [2024-07-25 14:28:59.226662] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 00:27:29.716 [2024-07-25 14:28:59.226736] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1047352 ] 00:27:29.716 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.716 [2024-07-25 14:28:59.281819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.974 [2024-07-25 14:28:59.387427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.974 14:28:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.974 14:28:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:29.974 14:28:59 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9dIqJiK792 00:27:29.974 14:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9dIqJiK792 00:27:30.232 14:28:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LJ6H4yqlly 00:27:30.232 14:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LJ6H4yqlly 00:27:30.490 14:28:59 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:30.490 14:28:59 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:30.490 14:28:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.490 14:28:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.490 14:28:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.748 14:29:00 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.9dIqJiK792 == \/\t\m\p\/\t\m\p\.\9\d\I\q\J\i\K\7\9\2 ]] 00:27:30.748 14:29:00 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:30.748 14:29:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:30.748 14:29:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.748 14:29:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:30.748 14:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.005 14:29:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.LJ6H4yqlly == \/\t\m\p\/\t\m\p\.\L\J\6\H\4\y\q\l\l\y ]] 00:27:31.005 14:29:00 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:31.005 14:29:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:31.005 14:29:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.005 14:29:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.005 14:29:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:31.005 14:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.263 14:29:00 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:31.263 14:29:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:31.263 14:29:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:31.263 14:29:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.263 14:29:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.263 14:29:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.263 14:29:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:31.521 14:29:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:31.521 14:29:01 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:31.521 14:29:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:31.779 [2024-07-25 14:29:01.232525] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:31.779 nvme0n1 00:27:31.779 14:29:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:31.779 14:29:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:31.779 14:29:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.779 14:29:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.779 14:29:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.779 14:29:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:32.037 14:29:01 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:32.037 14:29:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:32.037 14:29:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:32.037 14:29:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:32.037 14:29:01 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:32.037 14:29:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.037 14:29:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:32.295 14:29:01 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:32.295 14:29:01 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:32.295 Running I/O for 1 seconds... 00:27:33.669 00:27:33.669 Latency(us) 00:27:33.669 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.669 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:33.669 nvme0n1 : 1.01 9999.64 39.06 0.00 0.00 12749.24 3762.25 18544.26 00:27:33.669 =================================================================================================================== 00:27:33.669 Total : 9999.64 39.06 0.00 0.00 12749.24 3762.25 18544.26 00:27:33.669 0 00:27:33.669 14:29:02 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:33.669 14:29:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:33.669 14:29:03 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:33.669 14:29:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:33.669 14:29:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.669 14:29:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.669 14:29:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:33.669 14:29:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.926 14:29:03 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:33.926 14:29:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:33.926 14:29:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:33.926 14:29:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.926 14:29:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.926 14:29:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:33.926 14:29:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.184 14:29:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:34.184 14:29:03 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:34.184 14:29:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:34.184 14:29:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:34.184 14:29:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:34.184 14:29:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.184 14:29:03 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:34.184 14:29:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.184 14:29:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:34.184 14:29:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:34.442 [2024-07-25 14:29:03.943012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:34.442 [2024-07-25 14:29:03.943904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc599a0 (107): Transport endpoint is not connected 00:27:34.443 [2024-07-25 14:29:03.944892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc599a0 (9): Bad file descriptor 00:27:34.443 [2024-07-25 14:29:03.945890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:34.443 [2024-07-25 14:29:03.945916] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:34.443 [2024-07-25 14:29:03.945956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:34.443 request: 00:27:34.443 { 00:27:34.443 "name": "nvme0", 00:27:34.443 "trtype": "tcp", 00:27:34.443 "traddr": "127.0.0.1", 00:27:34.443 "adrfam": "ipv4", 00:27:34.443 "trsvcid": "4420", 00:27:34.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:34.443 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:34.443 "prchk_reftag": false, 00:27:34.443 "prchk_guard": false, 00:27:34.443 "hdgst": false, 00:27:34.443 "ddgst": false, 00:27:34.443 "psk": "key1", 00:27:34.443 "method": "bdev_nvme_attach_controller", 00:27:34.443 "req_id": 1 00:27:34.443 } 00:27:34.443 Got JSON-RPC error response 00:27:34.443 response: 00:27:34.443 { 00:27:34.443 "code": -5, 00:27:34.443 "message": "Input/output error" 00:27:34.443 } 00:27:34.443 14:29:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:34.443 14:29:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:34.443 14:29:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:34.443 14:29:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:34.443 14:29:03 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:34.443 14:29:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:34.443 14:29:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.443 14:29:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.443 14:29:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.443 14:29:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.701 14:29:04 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:34.701 14:29:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:34.701 14:29:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:34.701 14:29:04 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.701 14:29:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.701 14:29:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.701 14:29:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:34.959 14:29:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:34.959 14:29:04 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:34.959 14:29:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:35.217 14:29:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:35.217 14:29:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:35.475 14:29:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:35.475 14:29:04 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:35.475 14:29:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.733 14:29:05 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:35.733 14:29:05 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.9dIqJiK792 00:27:35.733 14:29:05 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.9dIqJiK792 00:27:35.733 14:29:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:35.733 14:29:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.9dIqJiK792 00:27:35.733 14:29:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:35.733 14:29:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.733 14:29:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:35.733 14:29:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.733 14:29:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9dIqJiK792 00:27:35.733 14:29:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9dIqJiK792 00:27:35.990 [2024-07-25 14:29:05.424919] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.9dIqJiK792': 0100660 00:27:35.990 [2024-07-25 14:29:05.424963] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:35.990 request: 00:27:35.990 { 00:27:35.990 "name": "key0", 00:27:35.990 "path": "/tmp/tmp.9dIqJiK792", 00:27:35.990 "method": "keyring_file_add_key", 00:27:35.990 "req_id": 1 00:27:35.990 } 00:27:35.990 Got JSON-RPC error response 00:27:35.990 response: 00:27:35.990 { 00:27:35.990 "code": -1, 00:27:35.990 "message": "Operation not permitted" 00:27:35.990 } 00:27:35.990 14:29:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:35.990 14:29:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.990 14:29:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.990 14:29:05 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.990 14:29:05 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.9dIqJiK792 00:27:35.990 14:29:05 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.9dIqJiK792 00:27:35.990 14:29:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.9dIqJiK792 00:27:36.248 14:29:05 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.9dIqJiK792 00:27:36.248 14:29:05 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:36.248 14:29:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:36.248 14:29:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.248 14:29:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.248 14:29:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.248 14:29:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:36.506 14:29:05 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:36.506 14:29:05 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.506 14:29:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:36.506 14:29:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.506 14:29:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:36.506 14:29:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.506 14:29:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:36.506 14:29:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:36.506 14:29:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.506 14:29:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.764 [2024-07-25 14:29:06.203068] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.9dIqJiK792': No such file or directory 00:27:36.764 [2024-07-25 14:29:06.203126] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:36.764 [2024-07-25 14:29:06.203156] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:36.764 [2024-07-25 14:29:06.203169] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:36.764 [2024-07-25 14:29:06.203182] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:36.764 request: 00:27:36.764 { 00:27:36.764 "name": "nvme0", 00:27:36.764 "trtype": "tcp", 00:27:36.764 "traddr": "127.0.0.1", 00:27:36.764 "adrfam": "ipv4", 00:27:36.764 
"trsvcid": "4420", 00:27:36.764 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.764 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.764 "prchk_reftag": false, 00:27:36.764 "prchk_guard": false, 00:27:36.764 "hdgst": false, 00:27:36.764 "ddgst": false, 00:27:36.764 "psk": "key0", 00:27:36.764 "method": "bdev_nvme_attach_controller", 00:27:36.764 "req_id": 1 00:27:36.764 } 00:27:36.764 Got JSON-RPC error response 00:27:36.764 response: 00:27:36.764 { 00:27:36.764 "code": -19, 00:27:36.764 "message": "No such device" 00:27:36.764 } 00:27:36.764 14:29:06 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:36.764 14:29:06 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:36.764 14:29:06 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:36.764 14:29:06 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:36.764 14:29:06 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:36.764 14:29:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:37.022 14:29:06 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.zupjIynm7t 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:37.022 14:29:06 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:37.022 14:29:06 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:37.022 14:29:06 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:37.022 14:29:06 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:37.022 14:29:06 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:37.022 14:29:06 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.zupjIynm7t 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.zupjIynm7t 00:27:37.022 14:29:06 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.zupjIynm7t 00:27:37.022 14:29:06 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zupjIynm7t 00:27:37.022 14:29:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zupjIynm7t 00:27:37.280 14:29:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:37.280 14:29:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:37.538 nvme0n1 00:27:37.538 
14:29:07 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:37.538 14:29:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:37.538 14:29:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.538 14:29:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.538 14:29:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.538 14:29:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.796 14:29:07 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:37.796 14:29:07 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:37.796 14:29:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:38.054 14:29:07 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:38.054 14:29:07 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:38.054 14:29:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.054 14:29:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.054 14:29:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.312 14:29:07 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:38.312 14:29:07 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:38.312 14:29:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:38.312 14:29:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:38.312 14:29:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:38.312 14:29:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.312 14:29:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.570 14:29:08 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:38.570 14:29:08 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:38.570 14:29:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:38.829 14:29:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:38.829 14:29:08 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:38.829 14:29:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:39.087 14:29:08 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:39.087 14:29:08 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.zupjIynm7t 00:27:39.087 14:29:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.zupjIynm7t 00:27:39.345 14:29:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.LJ6H4yqlly 00:27:39.345 14:29:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.LJ6H4yqlly 00:27:39.603 14:29:09 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.603 14:29:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.861 nvme0n1 00:27:39.861 14:29:09 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:39.861 14:29:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:40.119 14:29:09 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:40.119 "subsystems": [ 00:27:40.119 { 00:27:40.119 "subsystem": "keyring", 00:27:40.119 "config": [ 00:27:40.119 { 00:27:40.119 "method": "keyring_file_add_key", 00:27:40.119 "params": { 00:27:40.119 "name": "key0", 00:27:40.119 "path": "/tmp/tmp.zupjIynm7t" 00:27:40.119 } 00:27:40.119 }, 00:27:40.119 { 00:27:40.119 "method": "keyring_file_add_key", 00:27:40.119 "params": { 00:27:40.119 "name": "key1", 00:27:40.119 "path": "/tmp/tmp.LJ6H4yqlly" 00:27:40.119 } 00:27:40.119 } 00:27:40.119 ] 00:27:40.119 }, 00:27:40.119 { 00:27:40.119 "subsystem": "iobuf", 00:27:40.119 "config": [ 00:27:40.119 { 00:27:40.119 "method": "iobuf_set_options", 00:27:40.119 "params": { 00:27:40.119 "small_pool_count": 8192, 00:27:40.119 "large_pool_count": 1024, 00:27:40.119 "small_bufsize": 8192, 00:27:40.119 "large_bufsize": 135168 00:27:40.119 } 00:27:40.120 } 00:27:40.120 ] 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "subsystem": "sock", 00:27:40.120 "config": [ 00:27:40.120 { 00:27:40.120 "method": "sock_set_default_impl", 00:27:40.120 "params": { 00:27:40.120 "impl_name": "posix" 00:27:40.120 } 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "method": "sock_impl_set_options", 00:27:40.120 "params": { 00:27:40.120 "impl_name": "ssl", 00:27:40.120 "recv_buf_size": 4096, 00:27:40.120 "send_buf_size": 4096, 00:27:40.120 "enable_recv_pipe": true, 00:27:40.120 "enable_quickack": false, 00:27:40.120 "enable_placement_id": 0, 00:27:40.120 "enable_zerocopy_send_server": true, 00:27:40.120 "enable_zerocopy_send_client": false, 00:27:40.120 "zerocopy_threshold": 0, 00:27:40.120 "tls_version": 0, 00:27:40.120 "enable_ktls": false 00:27:40.120 } 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "method": "sock_impl_set_options", 00:27:40.120 "params": { 00:27:40.120 "impl_name": "posix", 00:27:40.120 "recv_buf_size": 2097152, 00:27:40.120 "send_buf_size": 2097152, 00:27:40.120 "enable_recv_pipe": true, 00:27:40.120 "enable_quickack": false, 00:27:40.120 "enable_placement_id": 0, 00:27:40.120 "enable_zerocopy_send_server": true, 00:27:40.120 "enable_zerocopy_send_client": false, 00:27:40.120 "zerocopy_threshold": 0, 00:27:40.120 "tls_version": 0, 00:27:40.120 "enable_ktls": false 00:27:40.120 } 00:27:40.120 } 00:27:40.120 ] 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "subsystem": "vmd", 00:27:40.120 "config": [] 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "subsystem": "accel", 00:27:40.120 "config": [ 00:27:40.120 { 00:27:40.120 "method": "accel_set_options", 00:27:40.120 "params": { 00:27:40.120 "small_cache_size": 128, 00:27:40.120 "large_cache_size": 16, 00:27:40.120 "task_count": 2048, 00:27:40.120 "sequence_count": 2048, 00:27:40.120 "buf_count": 2048 00:27:40.120 } 00:27:40.120 } 00:27:40.120 ] 00:27:40.120 
}, 00:27:40.120 { 00:27:40.120 "subsystem": "bdev", 00:27:40.120 "config": [ 00:27:40.120 { 00:27:40.120 "method": "bdev_set_options", 00:27:40.120 "params": { 00:27:40.120 "bdev_io_pool_size": 65535, 00:27:40.120 "bdev_io_cache_size": 256, 00:27:40.120 "bdev_auto_examine": true, 00:27:40.120 "iobuf_small_cache_size": 128, 00:27:40.120 "iobuf_large_cache_size": 16 00:27:40.120 } 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "method": "bdev_raid_set_options", 00:27:40.120 "params": { 00:27:40.120 "process_window_size_kb": 1024, 00:27:40.120 "process_max_bandwidth_mb_sec": 0 00:27:40.120 } 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "method": "bdev_iscsi_set_options", 00:27:40.120 "params": { 00:27:40.120 "timeout_sec": 30 00:27:40.120 } 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "method": "bdev_nvme_set_options", 00:27:40.120 "params": { 00:27:40.120 "action_on_timeout": "none", 00:27:40.120 "timeout_us": 0, 00:27:40.120 "timeout_admin_us": 0, 00:27:40.120 "keep_alive_timeout_ms": 10000, 00:27:40.120 "arbitration_burst": 0, 00:27:40.120 "low_priority_weight": 0, 00:27:40.120 "medium_priority_weight": 0, 00:27:40.120 "high_priority_weight": 0, 00:27:40.120 "nvme_adminq_poll_period_us": 10000, 00:27:40.120 "nvme_ioq_poll_period_us": 0, 00:27:40.120 "io_queue_requests": 512, 00:27:40.120 "delay_cmd_submit": true, 00:27:40.120 "transport_retry_count": 4, 00:27:40.120 "bdev_retry_count": 3, 00:27:40.120 "transport_ack_timeout": 0, 00:27:40.120 "ctrlr_loss_timeout_sec": 0, 00:27:40.120 "reconnect_delay_sec": 0, 00:27:40.120 "fast_io_fail_timeout_sec": 0, 00:27:40.120 "disable_auto_failback": false, 00:27:40.120 "generate_uuids": false, 00:27:40.120 "transport_tos": 0, 00:27:40.120 "nvme_error_stat": false, 00:27:40.120 "rdma_srq_size": 0, 00:27:40.120 "io_path_stat": false, 00:27:40.120 "allow_accel_sequence": false, 00:27:40.120 "rdma_max_cq_size": 0, 00:27:40.120 "rdma_cm_event_timeout_ms": 0, 00:27:40.120 "dhchap_digests": [ 00:27:40.120 "sha256", 00:27:40.120 "sha384", 00:27:40.120 "sha512" 00:27:40.120 ], 00:27:40.120 "dhchap_dhgroups": [ 00:27:40.120 "null", 00:27:40.120 "ffdhe2048", 00:27:40.120 "ffdhe3072", 00:27:40.120 "ffdhe4096", 00:27:40.120 "ffdhe6144", 00:27:40.120 "ffdhe8192" 00:27:40.120 ] 00:27:40.120 } 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "method": "bdev_nvme_attach_controller", 00:27:40.120 "params": { 00:27:40.120 "name": "nvme0", 00:27:40.120 "trtype": "TCP", 00:27:40.120 "adrfam": "IPv4", 00:27:40.120 "traddr": "127.0.0.1", 00:27:40.120 "trsvcid": "4420", 00:27:40.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.120 "prchk_reftag": false, 00:27:40.120 "prchk_guard": false, 00:27:40.120 "ctrlr_loss_timeout_sec": 0, 00:27:40.120 "reconnect_delay_sec": 0, 00:27:40.120 "fast_io_fail_timeout_sec": 0, 00:27:40.120 "psk": "key0", 00:27:40.120 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.120 "hdgst": false, 00:27:40.120 "ddgst": false 00:27:40.120 } 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "method": "bdev_nvme_set_hotplug", 00:27:40.120 "params": { 00:27:40.120 "period_us": 100000, 00:27:40.120 "enable": false 00:27:40.120 } 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "method": "bdev_wait_for_examine" 00:27:40.120 } 00:27:40.120 ] 00:27:40.120 }, 00:27:40.120 { 00:27:40.120 "subsystem": "nbd", 00:27:40.120 "config": [] 00:27:40.120 } 00:27:40.120 ] 00:27:40.120 }' 00:27:40.120 14:29:09 keyring_file -- keyring/file.sh@114 -- # killprocess 1047352 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1047352 ']' 00:27:40.120 14:29:09 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 1047352 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1047352 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1047352' 00:27:40.120 killing process with pid 1047352 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@967 -- # kill 1047352 00:27:40.120 Received shutdown signal, test time was about 1.000000 seconds 00:27:40.120 00:27:40.120 Latency(us) 00:27:40.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.120 =================================================================================================================== 00:27:40.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:40.120 14:29:09 keyring_file -- common/autotest_common.sh@972 -- # wait 1047352 00:27:40.379 14:29:09 keyring_file -- keyring/file.sh@117 -- # bperfpid=1048810 00:27:40.379 14:29:09 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1048810 /var/tmp/bperf.sock 00:27:40.379 14:29:09 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1048810 ']' 00:27:40.379 14:29:09 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:40.379 14:29:09 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:40.379 14:29:09 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:40.379 14:29:09 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:40.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:40.379 14:29:09 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:40.379 14:29:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:40.379 14:29:09 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:40.379 "subsystems": [ 00:27:40.379 { 00:27:40.379 "subsystem": "keyring", 00:27:40.379 "config": [ 00:27:40.379 { 00:27:40.379 "method": "keyring_file_add_key", 00:27:40.379 "params": { 00:27:40.379 "name": "key0", 00:27:40.379 "path": "/tmp/tmp.zupjIynm7t" 00:27:40.379 } 00:27:40.379 }, 00:27:40.379 { 00:27:40.379 "method": "keyring_file_add_key", 00:27:40.379 "params": { 00:27:40.379 "name": "key1", 00:27:40.379 "path": "/tmp/tmp.LJ6H4yqlly" 00:27:40.379 } 00:27:40.379 } 00:27:40.379 ] 00:27:40.379 }, 00:27:40.379 { 00:27:40.379 "subsystem": "iobuf", 00:27:40.379 "config": [ 00:27:40.379 { 00:27:40.379 "method": "iobuf_set_options", 00:27:40.379 "params": { 00:27:40.379 "small_pool_count": 8192, 00:27:40.379 "large_pool_count": 1024, 00:27:40.379 "small_bufsize": 8192, 00:27:40.379 "large_bufsize": 135168 00:27:40.379 } 00:27:40.379 } 00:27:40.379 ] 00:27:40.379 }, 00:27:40.379 { 00:27:40.379 "subsystem": "sock", 00:27:40.379 "config": [ 00:27:40.379 { 00:27:40.379 "method": "sock_set_default_impl", 00:27:40.379 "params": { 00:27:40.379 "impl_name": "posix" 00:27:40.379 } 00:27:40.379 }, 00:27:40.379 { 00:27:40.379 "method": "sock_impl_set_options", 00:27:40.379 "params": { 00:27:40.379 "impl_name": "ssl", 00:27:40.379 "recv_buf_size": 4096, 00:27:40.379 "send_buf_size": 4096, 00:27:40.379 "enable_recv_pipe": true, 00:27:40.379 "enable_quickack": false, 00:27:40.379 "enable_placement_id": 0, 00:27:40.379 "enable_zerocopy_send_server": true, 00:27:40.379 "enable_zerocopy_send_client": false, 00:27:40.379 "zerocopy_threshold": 0, 00:27:40.379 "tls_version": 0, 00:27:40.379 "enable_ktls": false 00:27:40.379 } 00:27:40.379 }, 00:27:40.379 { 00:27:40.380 "method": "sock_impl_set_options", 00:27:40.380 "params": { 00:27:40.380 "impl_name": "posix", 00:27:40.380 "recv_buf_size": 2097152, 00:27:40.380 "send_buf_size": 2097152, 00:27:40.380 "enable_recv_pipe": true, 00:27:40.380 "enable_quickack": false, 00:27:40.380 "enable_placement_id": 0, 00:27:40.380 "enable_zerocopy_send_server": true, 00:27:40.380 "enable_zerocopy_send_client": false, 00:27:40.380 "zerocopy_threshold": 0, 00:27:40.380 "tls_version": 0, 00:27:40.380 "enable_ktls": false 00:27:40.380 } 00:27:40.380 } 00:27:40.380 ] 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "subsystem": "vmd", 00:27:40.380 "config": [] 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "subsystem": "accel", 00:27:40.380 "config": [ 00:27:40.380 { 00:27:40.380 "method": "accel_set_options", 00:27:40.380 "params": { 00:27:40.380 "small_cache_size": 128, 00:27:40.380 "large_cache_size": 16, 00:27:40.380 "task_count": 2048, 00:27:40.380 "sequence_count": 2048, 00:27:40.380 "buf_count": 2048 00:27:40.380 } 00:27:40.380 } 00:27:40.380 ] 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "subsystem": "bdev", 00:27:40.380 "config": [ 00:27:40.380 { 00:27:40.380 "method": "bdev_set_options", 00:27:40.380 "params": { 00:27:40.380 "bdev_io_pool_size": 65535, 00:27:40.380 "bdev_io_cache_size": 256, 00:27:40.380 "bdev_auto_examine": true, 00:27:40.380 "iobuf_small_cache_size": 128, 00:27:40.380 "iobuf_large_cache_size": 16 00:27:40.380 } 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "method": "bdev_raid_set_options", 00:27:40.380 "params": { 00:27:40.380 "process_window_size_kb": 1024, 00:27:40.380 "process_max_bandwidth_mb_sec": 0 00:27:40.380 
} 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "method": "bdev_iscsi_set_options", 00:27:40.380 "params": { 00:27:40.380 "timeout_sec": 30 00:27:40.380 } 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "method": "bdev_nvme_set_options", 00:27:40.380 "params": { 00:27:40.380 "action_on_timeout": "none", 00:27:40.380 "timeout_us": 0, 00:27:40.380 "timeout_admin_us": 0, 00:27:40.380 "keep_alive_timeout_ms": 10000, 00:27:40.380 "arbitration_burst": 0, 00:27:40.380 "low_priority_weight": 0, 00:27:40.380 "medium_priority_weight": 0, 00:27:40.380 "high_priority_weight": 0, 00:27:40.380 "nvme_adminq_poll_period_us": 10000, 00:27:40.380 "nvme_ioq_poll_period_us": 0, 00:27:40.380 "io_queue_requests": 512, 00:27:40.380 "delay_cmd_submit": true, 00:27:40.380 "transport_retry_count": 4, 00:27:40.380 "bdev_retry_count": 3, 00:27:40.380 "transport_ack_timeout": 0, 00:27:40.380 "ctrlr_loss_timeout_sec": 0, 00:27:40.380 "reconnect_delay_sec": 0, 00:27:40.380 "fast_io_fail_timeout_sec": 0, 00:27:40.380 "disable_auto_failback": false, 00:27:40.380 "generate_uuids": false, 00:27:40.380 "transport_tos": 0, 00:27:40.380 "nvme_error_stat": false, 00:27:40.380 "rdma_srq_size": 0, 00:27:40.380 "io_path_stat": false, 00:27:40.380 "allow_accel_sequence": false, 00:27:40.380 "rdma_max_cq_size": 0, 00:27:40.380 "rdma_cm_event_timeout_ms": 0, 00:27:40.380 "dhchap_digests": [ 00:27:40.380 "sha256", 00:27:40.380 "sha384", 00:27:40.380 "sha512" 00:27:40.380 ], 00:27:40.380 "dhchap_dhgroups": [ 00:27:40.380 "null", 00:27:40.380 "ffdhe2048", 00:27:40.380 "ffdhe3072", 00:27:40.380 "ffdhe4096", 00:27:40.380 "ffdhe6144", 00:27:40.380 "ffdhe8192" 00:27:40.380 ] 00:27:40.380 } 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "method": "bdev_nvme_attach_controller", 00:27:40.380 "params": { 00:27:40.380 "name": "nvme0", 00:27:40.380 "trtype": "TCP", 00:27:40.380 "adrfam": "IPv4", 00:27:40.380 "traddr": "127.0.0.1", 00:27:40.380 "trsvcid": "4420", 00:27:40.380 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.380 "prchk_reftag": false, 00:27:40.380 "prchk_guard": false, 00:27:40.380 "ctrlr_loss_timeout_sec": 0, 00:27:40.380 "reconnect_delay_sec": 0, 00:27:40.380 "fast_io_fail_timeout_sec": 0, 00:27:40.380 "psk": "key0", 00:27:40.380 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.380 "hdgst": false, 00:27:40.380 "ddgst": false 00:27:40.380 } 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "method": "bdev_nvme_set_hotplug", 00:27:40.380 "params": { 00:27:40.380 "period_us": 100000, 00:27:40.380 "enable": false 00:27:40.380 } 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "method": "bdev_wait_for_examine" 00:27:40.380 } 00:27:40.380 ] 00:27:40.380 }, 00:27:40.380 { 00:27:40.380 "subsystem": "nbd", 00:27:40.380 "config": [] 00:27:40.380 } 00:27:40.380 ] 00:27:40.380 }' 00:27:40.380 [2024-07-25 14:29:10.028221] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
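The bperf restart above captures the running configuration (including both keyring_file keys) with save_config and feeds it verbatim to a fresh bdevperf instance via process substitution, which is the /dev/fd/63 seen on its command line. A minimal sketch of that pattern with the same option set used in this run (paths assumed as before):

config=$(./spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config)   # the JSON dump shown above
./spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")                   # bash exposes <(...) as /dev/fd/63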
00:27:40.380 [2024-07-25 14:29:10.028337] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1048810 ] 00:27:40.638 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.638 [2024-07-25 14:29:10.090018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.638 [2024-07-25 14:29:10.200206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.897 [2024-07-25 14:29:10.378919] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:41.465 14:29:11 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:41.465 14:29:11 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:41.465 14:29:11 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:41.465 14:29:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.465 14:29:11 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:41.723 14:29:11 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:41.723 14:29:11 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:41.723 14:29:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:41.723 14:29:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.723 14:29:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.723 14:29:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.723 14:29:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.981 14:29:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:41.981 14:29:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:41.981 14:29:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:41.981 14:29:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.981 14:29:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.981 14:29:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.981 14:29:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:42.240 14:29:11 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:42.240 14:29:11 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:42.240 14:29:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:42.240 14:29:11 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:42.498 14:29:12 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:42.498 14:29:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:42.498 14:29:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.zupjIynm7t /tmp/tmp.LJ6H4yqlly 00:27:42.498 14:29:12 keyring_file -- keyring/file.sh@20 -- # killprocess 1048810 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1048810 ']' 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1048810 00:27:42.498 14:29:12 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1048810 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1048810' 00:27:42.498 killing process with pid 1048810 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@967 -- # kill 1048810 00:27:42.498 Received shutdown signal, test time was about 1.000000 seconds 00:27:42.498 00:27:42.498 Latency(us) 00:27:42.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.498 =================================================================================================================== 00:27:42.498 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:42.498 14:29:12 keyring_file -- common/autotest_common.sh@972 -- # wait 1048810 00:27:42.758 14:29:12 keyring_file -- keyring/file.sh@21 -- # killprocess 1047341 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1047341 ']' 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1047341 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1047341 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1047341' 00:27:42.758 killing process with pid 1047341 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@967 -- # kill 1047341 00:27:42.758 [2024-07-25 14:29:12.319219] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:42.758 14:29:12 keyring_file -- common/autotest_common.sh@972 -- # wait 1047341 00:27:43.326 00:27:43.327 real 0m14.244s 00:27:43.327 user 0m35.609s 00:27:43.327 sys 0m3.306s 00:27:43.327 14:29:12 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.327 14:29:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:43.327 ************************************ 00:27:43.327 END TEST keyring_file 00:27:43.327 ************************************ 00:27:43.327 14:29:12 -- common/autotest_common.sh@1142 -- # return 0 00:27:43.327 14:29:12 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:43.327 14:29:12 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:43.327 14:29:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:43.327 14:29:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.327 14:29:12 -- common/autotest_common.sh@10 -- # set +x 00:27:43.327 ************************************ 00:27:43.327 START TEST keyring_linux 00:27:43.327 ************************************ 00:27:43.327 14:29:12 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:43.327 * Looking for test storage... 00:27:43.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:43.327 14:29:12 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.327 14:29:12 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.327 14:29:12 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.327 14:29:12 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.327 14:29:12 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.327 14:29:12 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.327 14:29:12 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.327 14:29:12 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:43.327 14:29:12 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:43.327 14:29:12 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:43.327 14:29:12 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:43.327 14:29:12 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:43.327 14:29:12 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:43.327 14:29:12 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:43.327 14:29:12 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:43.327 14:29:12 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:43.327 /tmp/:spdk-test:key0 00:27:43.327 14:29:12 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:43.327 14:29:12 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:43.327 14:29:12 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:43.327 /tmp/:spdk-test:key1 00:27:43.587 14:29:12 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1049169 00:27:43.587 14:29:12 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:43.587 14:29:12 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1049169 00:27:43.587 14:29:12 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1049169 ']' 00:27:43.587 14:29:12 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.587 14:29:12 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.587 14:29:12 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.587 14:29:12 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.587 14:29:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:43.587 [2024-07-25 14:29:13.031665] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
00:27:43.587 [2024-07-25 14:29:13.031766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049169 ] 00:27:43.587 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.587 [2024-07-25 14:29:13.091329] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.587 [2024-07-25 14:29:13.202244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:43.846 14:29:13 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:43.846 [2024-07-25 14:29:13.436196] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.846 null0 00:27:43.846 [2024-07-25 14:29:13.468256] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:43.846 [2024-07-25 14:29:13.468738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.846 14:29:13 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:43.846 666226014 00:27:43.846 14:29:13 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:43.846 16158301 00:27:43.846 14:29:13 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1049307 00:27:43.846 14:29:13 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:43.846 14:29:13 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1049307 /var/tmp/bperf.sock 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1049307 ']' 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.846 14:29:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:44.104 [2024-07-25 14:29:13.532424] Starting SPDK v24.09-pre git sha1 d3d267b54 / DPDK 24.03.0 initialization... 
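In the keyring_linux variant above, the interchange-format PSK is not handed to SPDK as a file path; it is loaded into the kernel session keyring with keyctl, and SPDK later resolves it by name. A sketch of the registration and lookup steps, using the same key name and the key0 material visible in the trace:

psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'   # key0 material from the trace
sn=$(keyctl add user :spdk-test:key0 "$psk" @s)   # prints the new serial (666226014 in this run)
keyctl search @s user :spdk-test:key0             # resolves the same serial by name
keyctl print "$sn"                                # dumps the payload for comparison
# unlinking is left to the cleanup phase, shown later in the trace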
00:27:44.104 [2024-07-25 14:29:13.532502] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1049307 ] 00:27:44.104 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.104 [2024-07-25 14:29:13.588989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:44.104 [2024-07-25 14:29:13.693488] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.104 14:29:13 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:44.104 14:29:13 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:44.104 14:29:13 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:44.104 14:29:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:44.382 14:29:13 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:44.382 14:29:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:44.664 14:29:14 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:44.664 14:29:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:44.924 [2024-07-25 14:29:14.539575] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:45.182 nvme0n1 00:27:45.182 14:29:14 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:45.182 14:29:14 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:45.182 14:29:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:45.182 14:29:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:45.182 14:29:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:45.182 14:29:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.440 14:29:14 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:45.440 14:29:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:45.440 14:29:14 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:45.440 14:29:14 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:45.440 14:29:14 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.440 14:29:14 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:45.440 14:29:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.698 14:29:15 keyring_linux -- keyring/linux.sh@25 -- # sn=666226014 00:27:45.698 14:29:15 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:45.698 14:29:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:27:45.698 14:29:15 keyring_linux -- keyring/linux.sh@26 -- # [[ 666226014 == \6\6\6\2\2\6\0\1\4 ]] 00:27:45.698 14:29:15 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 666226014 00:27:45.698 14:29:15 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:45.698 14:29:15 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:45.698 Running I/O for 1 seconds... 00:27:46.632 00:27:46.632 Latency(us) 00:27:46.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.632 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:46.632 nvme0n1 : 1.01 10498.87 41.01 0.00 0.00 12110.75 4174.89 16796.63 00:27:46.632 =================================================================================================================== 00:27:46.632 Total : 10498.87 41.01 0.00 0.00 12110.75 4174.89 16796.63 00:27:46.632 0 00:27:46.632 14:29:16 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:46.632 14:29:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:46.890 14:29:16 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:46.890 14:29:16 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:46.890 14:29:16 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:46.890 14:29:16 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:46.890 14:29:16 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:46.890 14:29:16 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:47.148 14:29:16 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:47.148 14:29:16 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:47.148 14:29:16 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:47.148 14:29:16 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.148 14:29:16 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:47.148 14:29:16 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.148 14:29:16 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:47.148 14:29:16 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:47.148 14:29:16 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:47.148 14:29:16 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:47.148 14:29:16 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.148 14:29:16 keyring_linux -- 
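Because bdevperf is launched with --wait-for-rpc in this test, the linux keyring module has to be enabled before framework initialization; only then can the controller be attached with a kernel key name instead of a registered file key, and the serial reported over RPC can be cross-checked against keyctl, as keyring/linux.sh@25-27 does above. A sketch of that sequence (rpc.py path and socket as before):

rpc="./spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc keyring_linux_set_options --enable            # must be issued before framework_start_init
$rpc framework_start_init
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
sn_rpc=$($rpc keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
[[ "$sn_rpc" == "$(keyctl search @s user :spdk-test:key0)" ]] && echo "RPC and keyctl agree on the serial"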
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.406 [2024-07-25 14:29:17.001746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:47.406 [2024-07-25 14:29:17.002117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2210030 (107): Transport endpoint is not connected 00:27:47.406 [2024-07-25 14:29:17.003121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2210030 (9): Bad file descriptor 00:27:47.406 [2024-07-25 14:29:17.004120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:47.406 [2024-07-25 14:29:17.004140] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:47.406 [2024-07-25 14:29:17.004154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:47.406 request: 00:27:47.406 { 00:27:47.406 "name": "nvme0", 00:27:47.406 "trtype": "tcp", 00:27:47.406 "traddr": "127.0.0.1", 00:27:47.406 "adrfam": "ipv4", 00:27:47.406 "trsvcid": "4420", 00:27:47.406 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.406 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.406 "prchk_reftag": false, 00:27:47.406 "prchk_guard": false, 00:27:47.406 "hdgst": false, 00:27:47.406 "ddgst": false, 00:27:47.406 "psk": ":spdk-test:key1", 00:27:47.406 "method": "bdev_nvme_attach_controller", 00:27:47.406 "req_id": 1 00:27:47.406 } 00:27:47.406 Got JSON-RPC error response 00:27:47.406 response: 00:27:47.406 { 00:27:47.406 "code": -5, 00:27:47.406 "message": "Input/output error" 00:27:47.406 } 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@33 -- # sn=666226014 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 666226014 00:27:47.406 1 links removed 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@33 -- # sn=16158301 00:27:47.406 
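The NOT-wrapped attach above is an expected-failure case (es=1): the request with :spdk-test:key1 is rejected and surfaces as JSON-RPC error -5 (Input/output error) rather than a keyring lookup failure. The cleanup that starts here simply resolves each :spdk-test key name back to its kernel serial and unlinks it; roughly:

for name in :spdk-test:key0 :spdk-test:key1; do
    sn=$(keyctl search @s user "$name") || continue   # skip names that are already gone
    keyctl unlink "$sn"                               # the trace reports "1 links removed" per key
done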
14:29:17 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 16158301 00:27:47.406 1 links removed 00:27:47.406 14:29:17 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1049307 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1049307 ']' 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1049307 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.406 14:29:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1049307 00:27:47.665 14:29:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:47.665 14:29:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:47.665 14:29:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1049307' 00:27:47.665 killing process with pid 1049307 00:27:47.665 14:29:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 1049307 00:27:47.665 Received shutdown signal, test time was about 1.000000 seconds 00:27:47.665 00:27:47.665 Latency(us) 00:27:47.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.665 =================================================================================================================== 00:27:47.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.665 14:29:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 1049307 00:27:47.923 14:29:17 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1049169 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1049169 ']' 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1049169 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1049169 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1049169' 00:27:47.923 killing process with pid 1049169 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@967 -- # kill 1049169 00:27:47.923 14:29:17 keyring_linux -- common/autotest_common.sh@972 -- # wait 1049169 00:27:48.182 00:27:48.182 real 0m4.975s 00:27:48.182 user 0m9.638s 00:27:48.182 sys 0m1.571s 00:27:48.182 14:29:17 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.182 14:29:17 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:48.182 ************************************ 00:27:48.182 END TEST keyring_linux 00:27:48.182 ************************************ 00:27:48.182 14:29:17 -- common/autotest_common.sh@1142 -- # return 0 00:27:48.182 14:29:17 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:48.182 14:29:17 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:48.182 14:29:17 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:48.182 14:29:17 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:48.182 14:29:17 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:48.182 14:29:17 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:48.182 14:29:17 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:48.182 14:29:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:48.182 14:29:17 -- common/autotest_common.sh@10 -- # set +x 00:27:48.182 14:29:17 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:48.182 14:29:17 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:48.182 14:29:17 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:48.182 14:29:17 -- common/autotest_common.sh@10 -- # set +x 00:27:50.082 INFO: APP EXITING 00:27:50.082 INFO: killing all VMs 00:27:50.082 INFO: killing vhost app 00:27:50.082 INFO: EXIT DONE 00:27:51.458 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:51.458 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:51.458 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:51.458 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:51.458 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:51.458 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:51.458 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:27:51.458 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:51.458 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:51.458 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:51.458 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:51.458 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:51.458 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:51.458 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:51.458 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:51.458 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:51.458 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:52.834 Cleaning 00:27:52.834 Removing: /var/run/dpdk/spdk0/config 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:52.834 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:52.834 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:52.834 Removing: /var/run/dpdk/spdk1/config 00:27:52.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:52.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:52.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:52.834 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:52.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:52.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:52.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:52.834 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:52.834 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:52.834 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:52.834 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:52.834 Removing: /var/run/dpdk/spdk2/config 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:52.834 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:52.834 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:52.834 Removing: /var/run/dpdk/spdk3/config 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:52.834 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:52.834 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:52.834 Removing: /var/run/dpdk/spdk4/config 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:52.834 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:52.834 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:52.834 Removing: /dev/shm/bdev_svc_trace.1 00:27:52.834 Removing: /dev/shm/nvmf_trace.0 00:27:52.834 Removing: /dev/shm/spdk_tgt_trace.pid791260 00:27:52.834 Removing: /var/run/dpdk/spdk0 00:27:52.834 Removing: /var/run/dpdk/spdk1 00:27:52.834 Removing: /var/run/dpdk/spdk2 00:27:52.834 Removing: /var/run/dpdk/spdk3 00:27:52.834 Removing: /var/run/dpdk/spdk4 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1010684 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1011094 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1011619 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1012030 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1012616 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1013028 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1013432 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1013847 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1016330 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1016478 00:27:53.094 Removing: 
/var/run/dpdk/spdk_pid1020269 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1020442 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1022046 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1027718 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1027730 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1030638 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1032051 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1033451 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1034208 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1035727 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1036597 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1041940 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1042266 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1042660 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1044219 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1044618 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1044900 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1047341 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1047352 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1048810 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1049169 00:27:53.094 Removing: /var/run/dpdk/spdk_pid1049307 00:27:53.094 Removing: /var/run/dpdk/spdk_pid789655 00:27:53.094 Removing: /var/run/dpdk/spdk_pid790387 00:27:53.094 Removing: /var/run/dpdk/spdk_pid791260 00:27:53.094 Removing: /var/run/dpdk/spdk_pid791633 00:27:53.094 Removing: /var/run/dpdk/spdk_pid792326 00:27:53.094 Removing: /var/run/dpdk/spdk_pid792467 00:27:53.094 Removing: /var/run/dpdk/spdk_pid793181 00:27:53.094 Removing: /var/run/dpdk/spdk_pid793197 00:27:53.094 Removing: /var/run/dpdk/spdk_pid793441 00:27:53.094 Removing: /var/run/dpdk/spdk_pid794748 00:27:53.094 Removing: /var/run/dpdk/spdk_pid795785 00:27:53.094 Removing: /var/run/dpdk/spdk_pid796096 00:27:53.094 Removing: /var/run/dpdk/spdk_pid796512 00:27:53.094 Removing: /var/run/dpdk/spdk_pid797086 00:27:53.094 Removing: /var/run/dpdk/spdk_pid797304 00:27:53.094 Removing: /var/run/dpdk/spdk_pid797468 00:27:53.094 Removing: /var/run/dpdk/spdk_pid797627 00:27:53.094 Removing: /var/run/dpdk/spdk_pid797807 00:27:53.094 Removing: /var/run/dpdk/spdk_pid798121 00:27:53.094 Removing: /var/run/dpdk/spdk_pid800468 00:27:53.094 Removing: /var/run/dpdk/spdk_pid800636 00:27:53.094 Removing: /var/run/dpdk/spdk_pid800797 00:27:53.094 Removing: /var/run/dpdk/spdk_pid800881 00:27:53.094 Removing: /var/run/dpdk/spdk_pid801238 00:27:53.094 Removing: /var/run/dpdk/spdk_pid801250 00:27:53.094 Removing: /var/run/dpdk/spdk_pid801669 00:27:53.094 Removing: /var/run/dpdk/spdk_pid801678 00:27:53.094 Removing: /var/run/dpdk/spdk_pid801965 00:27:53.094 Removing: /var/run/dpdk/spdk_pid801976 00:27:53.094 Removing: /var/run/dpdk/spdk_pid802146 00:27:53.094 Removing: /var/run/dpdk/spdk_pid802271 00:27:53.094 Removing: /var/run/dpdk/spdk_pid802641 00:27:53.094 Removing: /var/run/dpdk/spdk_pid802798 00:27:53.094 Removing: /var/run/dpdk/spdk_pid803106 00:27:53.094 Removing: /var/run/dpdk/spdk_pid803204 00:27:53.094 Removing: /var/run/dpdk/spdk_pid803308 00:27:53.094 Removing: /var/run/dpdk/spdk_pid803386 00:27:53.094 Removing: /var/run/dpdk/spdk_pid803646 00:27:53.094 Removing: /var/run/dpdk/spdk_pid803802 00:27:53.094 Removing: /var/run/dpdk/spdk_pid803966 00:27:53.094 Removing: /var/run/dpdk/spdk_pid804232 00:27:53.094 Removing: /var/run/dpdk/spdk_pid804396 00:27:53.094 Removing: /var/run/dpdk/spdk_pid804549 00:27:53.094 Removing: /var/run/dpdk/spdk_pid804826 00:27:53.094 Removing: /var/run/dpdk/spdk_pid804981 00:27:53.094 Removing: /var/run/dpdk/spdk_pid805141 00:27:53.094 
Removing: /var/run/dpdk/spdk_pid805326 00:27:53.094 Removing: /var/run/dpdk/spdk_pid805571 00:27:53.094 Removing: /var/run/dpdk/spdk_pid805731 00:27:53.094 Removing: /var/run/dpdk/spdk_pid805898 00:27:53.094 Removing: /var/run/dpdk/spdk_pid806161 00:27:53.094 Removing: /var/run/dpdk/spdk_pid806314 00:27:53.094 Removing: /var/run/dpdk/spdk_pid806478 00:27:53.094 Removing: /var/run/dpdk/spdk_pid806752 00:27:53.094 Removing: /var/run/dpdk/spdk_pid806912 00:27:53.094 Removing: /var/run/dpdk/spdk_pid807083 00:27:53.094 Removing: /var/run/dpdk/spdk_pid807364 00:27:53.094 Removing: /var/run/dpdk/spdk_pid807431 00:27:53.094 Removing: /var/run/dpdk/spdk_pid807637 00:27:53.094 Removing: /var/run/dpdk/spdk_pid809711 00:27:53.094 Removing: /var/run/dpdk/spdk_pid812336 00:27:53.094 Removing: /var/run/dpdk/spdk_pid819183 00:27:53.094 Removing: /var/run/dpdk/spdk_pid819600 00:27:53.094 Removing: /var/run/dpdk/spdk_pid822103 00:27:53.094 Removing: /var/run/dpdk/spdk_pid822379 00:27:53.094 Removing: /var/run/dpdk/spdk_pid824894 00:27:53.094 Removing: /var/run/dpdk/spdk_pid829033 00:27:53.094 Removing: /var/run/dpdk/spdk_pid831397 00:27:53.094 Removing: /var/run/dpdk/spdk_pid837687 00:27:53.094 Removing: /var/run/dpdk/spdk_pid842957 00:27:53.094 Removing: /var/run/dpdk/spdk_pid844212 00:27:53.094 Removing: /var/run/dpdk/spdk_pid844883 00:27:53.094 Removing: /var/run/dpdk/spdk_pid855214 00:27:53.094 Removing: /var/run/dpdk/spdk_pid857495 00:27:53.094 Removing: /var/run/dpdk/spdk_pid883625 00:27:53.094 Removing: /var/run/dpdk/spdk_pid886842 00:27:53.094 Removing: /var/run/dpdk/spdk_pid890716 00:27:53.094 Removing: /var/run/dpdk/spdk_pid894514 00:27:53.094 Removing: /var/run/dpdk/spdk_pid894516 00:27:53.094 Removing: /var/run/dpdk/spdk_pid895168 00:27:53.094 Removing: /var/run/dpdk/spdk_pid895714 00:27:53.094 Removing: /var/run/dpdk/spdk_pid896359 00:27:53.094 Removing: /var/run/dpdk/spdk_pid896764 00:27:53.094 Removing: /var/run/dpdk/spdk_pid896770 00:27:53.094 Removing: /var/run/dpdk/spdk_pid897027 00:27:53.094 Removing: /var/run/dpdk/spdk_pid897153 00:27:53.094 Removing: /var/run/dpdk/spdk_pid897165 00:27:53.094 Removing: /var/run/dpdk/spdk_pid897777 00:27:53.351 Removing: /var/run/dpdk/spdk_pid898357 00:27:53.351 Removing: /var/run/dpdk/spdk_pid899015 00:27:53.351 Removing: /var/run/dpdk/spdk_pid899417 00:27:53.351 Removing: /var/run/dpdk/spdk_pid899419 00:27:53.351 Removing: /var/run/dpdk/spdk_pid899679 00:27:53.351 Removing: /var/run/dpdk/spdk_pid900682 00:27:53.351 Removing: /var/run/dpdk/spdk_pid901812 00:27:53.351 Removing: /var/run/dpdk/spdk_pid907230 00:27:53.351 Removing: /var/run/dpdk/spdk_pid931776 00:27:53.351 Removing: /var/run/dpdk/spdk_pid934560 00:27:53.351 Removing: /var/run/dpdk/spdk_pid935739 00:27:53.351 Removing: /var/run/dpdk/spdk_pid937037 00:27:53.351 Removing: /var/run/dpdk/spdk_pid937073 00:27:53.351 Removing: /var/run/dpdk/spdk_pid937211 00:27:53.351 Removing: /var/run/dpdk/spdk_pid937348 00:27:53.351 Removing: /var/run/dpdk/spdk_pid937782 00:27:53.351 Removing: /var/run/dpdk/spdk_pid938983 00:27:53.351 Removing: /var/run/dpdk/spdk_pid939706 00:27:53.351 Removing: /var/run/dpdk/spdk_pid940129 00:27:53.351 Removing: /var/run/dpdk/spdk_pid941746 00:27:53.351 Removing: /var/run/dpdk/spdk_pid942058 00:27:53.351 Removing: /var/run/dpdk/spdk_pid942614 00:27:53.351 Removing: /var/run/dpdk/spdk_pid945132 00:27:53.351 Removing: /var/run/dpdk/spdk_pid951164 00:27:53.351 Removing: /var/run/dpdk/spdk_pid953959 00:27:53.351 Removing: /var/run/dpdk/spdk_pid958335 00:27:53.351 Removing: 
/var/run/dpdk/spdk_pid959273 00:27:53.351 Removing: /var/run/dpdk/spdk_pid960362 00:27:53.351 Removing: /var/run/dpdk/spdk_pid962930 00:27:53.351 Removing: /var/run/dpdk/spdk_pid965290 00:27:53.351 Removing: /var/run/dpdk/spdk_pid969500 00:27:53.351 Removing: /var/run/dpdk/spdk_pid969502 00:27:53.351 Removing: /var/run/dpdk/spdk_pid972311 00:27:53.351 Removing: /var/run/dpdk/spdk_pid972529 00:27:53.351 Removing: /var/run/dpdk/spdk_pid972671 00:27:53.351 Removing: /var/run/dpdk/spdk_pid972935 00:27:53.351 Removing: /var/run/dpdk/spdk_pid972940 00:27:53.351 Removing: /var/run/dpdk/spdk_pid975714 00:27:53.351 Removing: /var/run/dpdk/spdk_pid976052 00:27:53.351 Removing: /var/run/dpdk/spdk_pid978705 00:27:53.351 Removing: /var/run/dpdk/spdk_pid980572 00:27:53.351 Removing: /var/run/dpdk/spdk_pid983979 00:27:53.351 Removing: /var/run/dpdk/spdk_pid987435 00:27:53.351 Removing: /var/run/dpdk/spdk_pid994295 00:27:53.351 Removing: /var/run/dpdk/spdk_pid998747 00:27:53.351 Removing: /var/run/dpdk/spdk_pid998758 00:27:53.351 Clean 00:27:53.351 14:29:22 -- common/autotest_common.sh@1451 -- # return 0 00:27:53.351 14:29:22 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:53.351 14:29:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:53.351 14:29:22 -- common/autotest_common.sh@10 -- # set +x 00:27:53.351 14:29:22 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:27:53.351 14:29:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:53.351 14:29:22 -- common/autotest_common.sh@10 -- # set +x 00:27:53.351 14:29:22 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:53.351 14:29:22 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:53.351 14:29:22 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:53.351 14:29:22 -- spdk/autotest.sh@391 -- # hash lcov 00:27:53.351 14:29:22 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:53.351 14:29:22 -- spdk/autotest.sh@393 -- # hostname 00:27:53.351 14:29:22 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:53.608 geninfo: WARNING: invalid characters removed from testname! 
00:28:25.671 14:29:50 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:25.671 14:29:54 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:28.195 14:29:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:31.471 14:30:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:34.029 14:30:03 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:37.310 14:30:06 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:40.598 14:30:09 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:40.598 14:30:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.598 14:30:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:40.598 14:30:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.598 14:30:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.598 14:30:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.598 14:30:09 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.598 14:30:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.598 14:30:09 -- paths/export.sh@5 -- $ export PATH 00:28:40.598 14:30:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.598 14:30:09 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:28:40.598 14:30:09 -- common/autobuild_common.sh@447 -- $ date +%s 00:28:40.598 14:30:09 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721910609.XXXXXX 00:28:40.598 14:30:09 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721910609.mkVJOg 00:28:40.598 14:30:09 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:28:40.598 14:30:09 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:28:40.598 14:30:09 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:28:40.598 14:30:09 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:28:40.598 14:30:09 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:28:40.598 14:30:09 -- common/autobuild_common.sh@463 -- $ get_config_params 00:28:40.598 14:30:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:28:40.598 14:30:09 -- common/autotest_common.sh@10 -- $ set +x 00:28:40.598 14:30:09 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:28:40.598 14:30:09 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:28:40.598 14:30:09 -- pm/common@17 -- $ local monitor 00:28:40.598 14:30:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:40.598 14:30:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:40.598 14:30:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:40.598 14:30:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:40.598 14:30:09 -- pm/common@21 -- $ date +%s 00:28:40.598 14:30:09 -- pm/common@21 -- $ date +%s 00:28:40.598 
14:30:09 -- pm/common@25 -- $ sleep 1 00:28:40.598 14:30:09 -- pm/common@21 -- $ date +%s 00:28:40.598 14:30:09 -- pm/common@21 -- $ date +%s 00:28:40.598 14:30:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721910609 00:28:40.598 14:30:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721910609 00:28:40.598 14:30:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721910609 00:28:40.599 14:30:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721910609 00:28:40.599 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721910609_collect-vmstat.pm.log 00:28:40.599 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721910609_collect-cpu-load.pm.log 00:28:40.599 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721910609_collect-cpu-temp.pm.log 00:28:40.599 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721910609_collect-bmc-pm.bmc.pm.log 00:28:41.168 14:30:10 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:28:41.168 14:30:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:28:41.168 14:30:10 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:41.168 14:30:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:28:41.168 14:30:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:28:41.168 14:30:10 -- spdk/autopackage.sh@19 -- $ timing_finish 00:28:41.168 14:30:10 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:28:41.168 14:30:10 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:28:41.168 14:30:10 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:41.168 14:30:10 -- spdk/autopackage.sh@20 -- $ exit 0 00:28:41.168 14:30:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:28:41.168 14:30:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:28:41.168 14:30:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:28:41.168 14:30:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:41.168 14:30:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:28:41.168 14:30:10 -- pm/common@44 -- $ pid=1059542 00:28:41.168 14:30:10 -- pm/common@50 -- $ kill -TERM 1059542 00:28:41.168 14:30:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:41.169 14:30:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:28:41.169 14:30:10 -- pm/common@44 -- $ pid=1059544 00:28:41.169 14:30:10 -- pm/common@50 -- $ kill 
-TERM 1059544 00:28:41.169 14:30:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:41.169 14:30:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:28:41.169 14:30:10 -- pm/common@44 -- $ pid=1059546 00:28:41.169 14:30:10 -- pm/common@50 -- $ kill -TERM 1059546 00:28:41.169 14:30:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:28:41.169 14:30:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:28:41.169 14:30:10 -- pm/common@44 -- $ pid=1059570 00:28:41.169 14:30:10 -- pm/common@50 -- $ sudo -E kill -TERM 1059570 00:28:41.169 + [[ -n 705797 ]] 00:28:41.169 + sudo kill 705797 00:28:41.179 [Pipeline] } 00:28:41.199 [Pipeline] // stage 00:28:41.205 [Pipeline] } 00:28:41.222 [Pipeline] // timeout 00:28:41.228 [Pipeline] } 00:28:41.245 [Pipeline] // catchError 00:28:41.251 [Pipeline] } 00:28:41.270 [Pipeline] // wrap 00:28:41.277 [Pipeline] } 00:28:41.294 [Pipeline] // catchError 00:28:41.305 [Pipeline] stage 00:28:41.308 [Pipeline] { (Epilogue) 00:28:41.323 [Pipeline] catchError 00:28:41.326 [Pipeline] { 00:28:41.342 [Pipeline] echo 00:28:41.344 Cleanup processes 00:28:41.350 [Pipeline] sh 00:28:41.637 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:41.638 1059677 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:28:41.638 1059806 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:41.652 [Pipeline] sh 00:28:41.940 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:41.940 ++ grep -v 'sudo pgrep' 00:28:41.940 ++ awk '{print $1}' 00:28:41.940 + sudo kill -9 1059677 00:28:41.951 [Pipeline] sh 00:28:42.236 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:50.386 [Pipeline] sh 00:28:50.673 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:50.673 Artifacts sizes are good 00:28:50.690 [Pipeline] archiveArtifacts 00:28:50.698 Archiving artifacts 00:28:50.918 [Pipeline] sh 00:28:51.206 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:51.222 [Pipeline] cleanWs 00:28:51.233 [WS-CLEANUP] Deleting project workspace... 00:28:51.233 [WS-CLEANUP] Deferred wipeout is used... 00:28:51.241 [WS-CLEANUP] done 00:28:51.243 [Pipeline] } 00:28:51.265 [Pipeline] // catchError 00:28:51.279 [Pipeline] sh 00:28:51.560 + logger -p user.info -t JENKINS-CI 00:28:51.569 [Pipeline] } 00:28:51.588 [Pipeline] // stage 00:28:51.595 [Pipeline] } 00:28:51.614 [Pipeline] // node 00:28:51.620 [Pipeline] End of Pipeline 00:28:51.654 Finished: SUCCESS